
CNI Customization

CNI Providers for Kubernetes

Networking is a critical component that ensures smooth communication between containers, services, and applications. To make this process simpler and more flexible, Kubernetes clusters support various Container Network Interface (CNI) providers. These providers enable the configuration of networking plugins that help manage networking resources and communication across containers.

Upstream Kubernetes clusters, in particular, support three prominent CNI providers: Cilium, Calico, and Kube-OVN. Each of these providers has its own unique set of features, allowing users to select the best networking solution for their specific needs.

💡 Tip: Users can add any CNI when creating an addon; however, Cilium, Calico, and Kube-OVN are thoroughly qualified and tested to work seamlessly.

Users can choose and customize their preferred CNI provider's configuration through multiple interfaces, including UI, Terraform, RCTL, and API, ensuring seamless integration with their Kubernetes environments.

Calico

Calico is renowned for its flexibility in IP address management and is widely used to connect virtual machines or containers. It offers robust networking capabilities through its CNI plugin, which integrates seamlessly with Kubernetes, enabling users to efficiently manage networking across their containerized environments.

Cilium

Cilium uses eBPF (Extended Berkeley Packet Filter) technology to provide high-performance networking, enhanced security, and deep network visibility. It is especially beneficial for users looking for advanced load balancing, security policies, and observability in cloud-native environments.

⚠️ Important Note

  • Version-specific configuration for the Cilium CNI:

    For Cilium version 1.16.3 or later, use the following configuration in the values file:

    k8sServiceHost: "auto"  
    k8sServicePort: "6443"  
    

    For older versions, use the following configuration in the values file:

    k8sServiceHost: "k8master.service.consul"  
    k8sServicePort: "6443"  
    

  • Workload Access Configuration:

    To enable access to workloads from external sources, such as external DNS, set the kubeProxyReplacement value to strict in the Cilium CNI values file. By default, this value is set to false, which restricts external access to workloads. A combined example follows this note.
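
For illustration, here is a minimal sketch of a Cilium values file for version 1.16.3 or later with external workload access enabled, combining only the keys described above (the exact set of supported values may vary by Cilium chart version, so verify against the chart you upload):

    # Minimal Cilium values sketch (1.16.3+), combining the settings
    # from the notes above; verify keys against your chart version
    k8sServiceHost: "auto"
    k8sServicePort: "6443"
    kubeProxyReplacement: strict   # enables access from external sources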

Kube-OVN

Kube-OVN integrates OVN (Open Virtual Network) with Kubernetes, providing a solution for advanced networking setups. It supports both overlay and underlay networking, centralized IP management, and advanced network policies, making it ideal for users who need scalable, reliable, and highly customizable network configurations.


Why Is This Useful?

The introduction of multiple CNI providers allows users to choose a networking solution that best fits their specific requirements, whether it’s for performance, security, or flexibility. For instance, users who prioritize high security and advanced network observability might prefer Cilium, while those requiring simplified IP address management could lean toward Calico.

Additionally, Kube-OVN is a great choice for users who need more complex network setups, such as integration with OVN for large-scale networking environments that also require detailed control over network policies. The ability to choose between these options means that users are not forced into a one-size-fits-all solution and can instead fine-tune their clusters to meet the demands of their workloads.


CNI Customization

Previously, users were limited to selecting Container Network Interfaces (CNIs) from a predefined CNI list, which did not allow for any customization of CNI values. With the latest enhancement, users now have the flexibility to customize CNI values by leveraging addons and blueprints.

⚠️ Important Note: Required Labels for CNI Add-ons

When creating a new add-on, it is mandatory to include the following labels to ensure proper configuration:

  1. key: rafay.type and value: cni
  2. key: rafay.cni.name and one of the following values:
    • value: cilium
    • value: calico
    • value: kube-ovn
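
Expressed as YAML, the mandatory labels for, say, a Kube-OVN add-on look as follows. This is a sketch of the label block only; the surrounding add-on schema depends on the interface (UI, RCTL, API, or Terraform) being used:

    # Mandatory CNI add-on labels (Kube-OVN shown; use cilium or
    # calico for the other providers)
    labels:
      rafay.type: cni
      rafay.cni.name: kube-ovn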

There are two approaches available for customizing CNI values:

Attaching Helm Chart and Value Files

Users can upload the required Helm chart and value files for their chosen CNI through any interface (UI, RCTL, API, or Terraform). This method provides the flexibility to customize CNI values based on specific environment requirements. Note that adding labels is mandatory when creating a CNI add-on, as these labels are essential for proper configuration.

Here is an example of creating an add-on with the Kube-OVN CNI. Similarly, add-ons can be created with the Cilium and Calico CNIs.

To customize Kube-OVN CNI values using Helm charts and value files, follow these steps:

Step 1: Create a Namespace

  • Create a Namespace


Step 2: Create an Add-On

  • Create a New Add-on with the previously created namespace and click Create


Example labels for this case:

  • key: rafay.type and value: cni
  • key: rafay.cni.name and value: kube-ovn


  • Once the labels are added, click New Version
  • Provide the version name, and upload the Kube-OVN Helm chart along with the values file


  • To customize the values of the Kube-OVN CNI, click the edit icon and modify the required values in the editor

🔑 Key Point:
When using Kube-OVN on HA clusters, add the master node IPs (comma-separated) in the values.yaml file before provisioning, and ensure that the replica count matches the number of IPs provided. A sketch follows.
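
As a sketch, assuming a pre-1.13 Kube-OVN chart that exposes MASTER_NODES and replicaCount keys (verify the key names against the chart you upload; the IPs below are placeholders), a three-master HA setup might look like this:

    # Pre-1.13 Kube-OVN HA sketch: comma-separated master node IPs
    # with a matching replica count
    MASTER_NODES: "192.168.1.10,192.168.1.11,192.168.1.12"
    replicaCount: 3   # must equal the number of IPs above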


From Kube-OVN 1.13 onward, master node IPs are no longer required, as Kube-OVN handles them automatically by default.

⚠️ Important Note: CIDR Configuration for Cilium and Calico CNIs

  1. For Cilium CNI, the clusterPoolIPv4PodCIDRList field in the Helm values file must match the Pod Subnet (Cluster Networking Pod Subnet) specified during cluster provisioning

  2. For Calico CNI, the cidrs: [ <Node-IP-cidr> ] field in the Calico Helm values file must match the same Pod Subnet; illustrative snippets follow this note
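
Assuming the cluster's Pod Subnet is 10.244.0.0/16 (a placeholder; substitute the subnet specified at provisioning), the relevant values would look like the sketch below. In typical Cilium charts the CIDR list is nested under ipam.operator; verify the exact location in your values file:

    # Cilium values file: pod CIDR list matching the Pod Subnet
    ipam:
      operator:
        clusterPoolIPv4PodCIDRList: ["10.244.0.0/16"]
    ---
    # Calico values file: the same Pod Subnet
    cidrs: ["10.244.0.0/16"]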


  • Click Update if any modifications are made
  • Click Save Changes to complete the add-on creation

The add-on is now created with the Kube-OVN CNI Helm chart and value files.


Step 3: Create a Blueprint

  • Once the add-on is created, create a blueprint
  • Provide the required details in the General section and click Configure Add-Ons
  • Select the newly created add-on kube-ovn and the corresponding version from the drop-down
  • Click Save Changes


The blueprint is now created with the Kube-OVN CNI add-on.


Step 4: Create a Cluster

During cluster creation, select the newly created blueprint kube-ovn, provide the necessary configuration details, and proceed with provisioning.


⚠️ Key Point:
When adding a CNI-based blueprint, select CNI-via-Blueprint from Advanced Settings -> Cluster Networking.


Selecting any other CNI option from Cluster Networking will result in an error: "Cluster update failed. The CNI provider 'Calico-3.26.1' conflicts with the blueprint's CNI as primary CNI. Configure only one."

Once the cluster is created, run kubectl get pods -A to verify that the Kube-OVN CNI pods are running.


Using Predefined Add-On CNI Catalogs

For users who prefer a simpler approach, Cilium, Calico, and Kube-OVN are available through the add-on catalog. In this case, users don’t need to upload individual Helm charts and value files, as these CNIs are already packaged with the necessary resources in the catalog. Users can create a namespace and simply select the desired CNI package from the add-on catalog while creating the add-on, streamlining the process and ensuring all required files are included.


Once the add-on with the required CNI is created, add the mandatory labels, and it becomes available for use during blueprint creation. Users can incorporate this add-on when defining a new blueprint (proceed from Step 3), which can then be applied during cluster creation to deploy the customized CNI settings.


Day-2 Operations

In addition, users now have the option to modify CNI values as part of Day-2 operations. To do this, create a new version of the add-on by uploading an updated Helm chart and value files or by editing the existing value file. Below is an example where the ENABLE_ECMP value is modified from true to false; click Update to apply the changes. A sketch of the edited values follows.
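
As a sketch, assuming a Kube-OVN chart that exposes ENABLE_ECMP under its func section (verify the flag's location in the chart version you uploaded), the edited values would contain:

    # Day-2 change: ECMP disabled in the new add-on version
    func:
      ENABLE_ECMP: false   # previously true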


Next, update the cluster blueprint with the new add-on version.


Then apply the updated blueprint to the cluster to deploy the modified Kube-OVN CNI. This enhancement provides added flexibility for ongoing network configuration adjustments that were previously unavailable.


Once the blueprint is updated, users can view the status of the latest blueprint deployment to the cluster.



Migration to Blueprint-Based CNI Configuration

🚀 For older upstream clusters currently using Cilium or Calico without a blueprint-based setup, migrating to a blueprint-based configuration for upgrades or other Day 2 operations requires attention to the following:

Key Steps for Migration

To ensure a seamless migration, follow these highlighted steps:

  • Maintain Add-on Compatibility: Ensure the add-on name remains unchanged to maintain compatibility.
  • Add the Required Labels: Include the following labels in the configuration:
    • key: rafay.type and value: cni
    • key: rafay.cni.name and value: cilium or calico
  • Set the Namespace: Configure the namespace for the add-on as kube-system. A combined sketch follows this list.
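
Putting these requirements together, here is a sketch of the relevant parts of the add-on definition. The field layout is illustrative rather than a definitive schema (the exact shape depends on the interface used), and <existing-addon-name> is a placeholder for the add-on's current name:

    # Migration sketch: name unchanged, CNI labels added,
    # namespace pinned to kube-system
    metadata:
      name: <existing-addon-name>   # must remain unchanged
      labels:
        rafay.type: cni
        rafay.cni.name: cilium      # or calico
    spec:
      namespace: kube-system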