BYOCNI with Cilium
Overview¶
Bring Your Own CNI (BYOCNI) is an AKS networking mode that lets you skip the default Azure CNI and install a custom CNI plugin of your choice — giving you full control over pod networking, IP management, and network policies.
This guide walks through provisioning an AKS cluster with BYOCNI and installing Cilium as the custom CNI plugin through the cluster blueprint, so that you can fully tailor Cilium's configuration to your networking requirements.
This workflow uses the existing namespace → add-on → blueprint → cluster provisioning flow, with the additional requirement to configure the following during Day 0 cluster provisioning:
- `networkPlugin: none`
- `podCidr`
The selected blueprint must include a CNI add-on labeled for Cilium. During provisioning, the platform installs the CNI using the blueprint-resolved Helm chart and values before marking the cluster as successfully provisioned.
Provisioning is considered complete only after:
- the Cilium components are installed successfully
- cluster nodes reach Ready state
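The blueprint-driven install step is, in effect, a Helm deployment of the uploaded chart and values. The sketch below approximates that step manually; it is illustrative only (the platform performs this for you), and the chart path and values file name are placeholders:

```shell
# install_cilium CHART VALUES — an approximation of the platform's
# blueprint-resolved install step (illustrative; do not run this against a
# platform-managed cluster).
install_cilium() {
  helm upgrade --install cilium "$1" \
    --namespace kube-system \
    -f "$2"
}

# Example: install_cilium ./cilium-1.16.5.tgz values.yaml
```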
Pre-requisites¶
Before creating the cluster, ensure the following:
- Download the Cilium Helm chart
- Prepare the `values.yaml` file
- Identify the required pod CIDR range
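The chart download can be scripted against Cilium's official Helm repository (https://helm.cilium.io/). A minimal sketch; the version shown is only an example, so substitute the version your blueprint requires:

```shell
# fetch_cilium_chart VERSION DEST — pull the Cilium chart from the official
# Helm repo so it can be uploaded to the add-on.
fetch_cilium_chart() {
  helm repo add cilium https://helm.cilium.io/ &&
  helm repo update &&
  helm pull cilium/cilium --version "$1" --destination "$2"
}

# Example: fetch_cilium_chart 1.16.5 .
```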
Step 1: Create Namespace¶
Create a namespace named `kube-system`.
Note: The namespace does not need to be published.
Step 2: Create Cilium Add-On¶
- Create a Helm 3 type add-on in the controller.
- Download the Cilium Helm chart (required version) from the Cilium charts repository and upload it to the add-on.
- Upload the `values.yaml` configuration file.
```yaml
# Chart: cilium/cilium
# Must match AKS: spec.managedCluster.properties.networkProfile.podCidr
kubeProxyReplacement: false
routingMode: tunnel
tunnelProtocol: vxlan
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - "10.244.0.0/16"
    clusterPoolIPv4MaskSize: 24
tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node.cilium.io/agent-not-ready"
    operator: "Exists"
    effect: "NoSchedule"
  - operator: Exists
operator:
  replicas: 1
  tolerations:
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoSchedule"
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoSchedule"
    - operator: Exists
envoy:
  enabled: true
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                k8s-app: cilium
            topologyKey: kubernetes.io/hostname
```
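Before uploading, a quick sanity check can catch a missing key in the values file. A minimal sketch; the file name and the specific keys checked are assumptions based on the configuration above:

```shell
# check_values FILE — succeed only if the keys this guide relies on are
# present in the Cilium values file.
check_values() {
  grep -q 'mode: cluster-pool' "$1" &&
  grep -q 'clusterPoolIPv4PodCIDRList:' "$1" &&
  grep -q 'node.cilium.io/agent-not-ready' "$1"
}

# Example: check_values values.yaml && echo "required keys present"
```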
Step 2.1: Add Labels¶
After creating the add-on, add the following mandatory labels:
```yaml
rafay.cni.name: cilium
rafay.type: cni
```
Alternative: Add Cilium from Application Catalog¶
Instead of manually uploading the Helm chart and values file, you can add the Cilium add-on directly from the Application Catalog. The catalog provides a pre-configured Cilium entry that can be deployed in a few clicks.
Select Cilium from the catalog and click Create Add-On. Choose the desired version and confirm.
The add-on will be created with the catalog-sourced Helm chart and values. Verify that the mandatory CNI labels (rafay.cni.name: cilium and rafay.type: cni) are present on the add-on.
Step 3: Create a Custom Blueprint¶
Create a custom blueprint and include the Cilium add-on created in the previous step (whether added manually or from the catalog).
Step 4: Provision the AKS Cluster¶
Create the AKS cluster using the cluster specification and ensure the following parameters are set:
- the custom blueprint
- `networkPlugin: none`
- the same `podCidr` value used in the add-on `values.yaml`
The following example shows the AKS cluster specification; the inline comments mark the mandatory BYOCNI settings.
```yaml
apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: shobhit-cni-pvt-proxy-310
  project: defaultproject
spec:
  blueprint: cilium-byo-cni
  blueprintversion: v1.2
  cloudprovider: shobhit-az-creds
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: shobhit-cni-pvt-proxy-310
    spec:
      managedCluster:
        apiVersion: "2024-01-01"
        location: centralindia
        properties:
          kubernetesVersion: 1.32.9
          networkProfile:
            dnsServiceIP: 10.0.0.10
            loadBalancerSku: standard
            networkPlugin: none     # Mandatory for BYOCNI
            podCidr: 10.244.0.0/16  # Must match Cilium values.yaml
            serviceCidr: 10.0.0.0/16
        type: Microsoft.ContainerService/managedClusters
      resourceGroupName: shobhit-rg
  type: aks
```
Ensure the `podCidr` value exactly matches the `clusterPoolIPv4PodCIDRList` value in the `values.yaml` file:
```yaml
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
      - "10.244.0.0/16"
```
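This consistency requirement can be checked mechanically before provisioning. A minimal sketch comparing the two files; the file names are placeholders, and the `awk` extraction assumes the simple formatting shown in this guide:

```shell
# cidr_match SPEC_FILE VALUES_FILE — succeed only if the AKS podCidr equals
# the first clusterPoolIPv4PodCIDRList entry in the Cilium values file.
cidr_match() {
  spec_cidr=$(awk '/podCidr:/ { print $2; exit }' "$1")
  cilium_cidr=$(awk '/clusterPoolIPv4PodCIDRList:/ { getline; gsub(/[-" ]/, ""); print; exit }' "$2")
  [ -n "$spec_cidr" ] && [ "$spec_cidr" = "$cilium_cidr" ]
}

# Example: cidr_match cluster.yaml values.yaml && echo "pod CIDRs match"
```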
Verify Cluster Readiness¶
After provisioning completes, verify that the nodes are in `Ready` state and that the Cilium pods are running in the `kube-system` namespace, for example with `kubectl get pods -A`.
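These readiness checks can be scripted; a minimal sketch, assuming a kubeconfig for the new cluster. The `nodes_ready` helper reads `kubectl get nodes --no-headers` output and succeeds only when every node reports `Ready`:

```shell
# nodes_ready — read `kubectl get nodes --no-headers` output on stdin and
# succeed only if at least one node exists and all nodes report Ready.
nodes_ready() {
  awk '$2 != "Ready" { bad = 1 } END { exit (NR == 0 || bad) }'
}

# Typical usage against the new cluster:
#   kubectl get nodes --no-headers | nodes_ready && echo "all nodes Ready"
#   kubectl -n kube-system get pods -l k8s-app=cilium   # Cilium agent pods
```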
Day 2 CNI Configuration Updates¶
CNI configuration updates are supported both at Day 0 and during Day 2 operations for AKS clusters created using BYOCNI.
For clusters that are provisioned with BYOCNI during Day 0:
- CNI configuration updates are supported during Day 2
- Updates can be applied using supported workflows such as blueprint updates
Important limitation
Existing AKS clusters that were not originally created with BYOCNI cannot be converted later into BYOCNI-based clusters.
Only clusters provisioned with BYOCNI during Day 0 support subsequent Day 2 CNI configuration updates.