Using Cilium as a Kubernetes Load Balancer: A Powerful Alternative to MetalLB

In Kubernetes, exposing services of type LoadBalancer in on-prem or bare-metal environments typically requires a dedicated "Layer 2" or "BGP-based" software load balancer—such as MetalLB. While MetalLB has been the go-to solution for this use case, recent advances in Cilium, a powerful eBPF-based Kubernetes networking stack, offer a modern and more integrated alternative.

Cilium isn’t just a fast, scalable Container Network Interface (CNI). It also includes cilium-lb, a built-in eBPF-powered load balancer that can replace MetalLB with a more performant, secure, and cloud-native approach.

(Figure: Cilium-based Kubernetes load balancer)


Why a Load Balancer for Kubernetes?

A Kubernetes Service of type LoadBalancer exposes the Service externally using an external load balancer. Kubernetes itself does not provide a load-balancing implementation; you must supply one. One of the tasks performed by such an external load balancer is assigning an external IP address to the Kubernetes Service.
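As a reminder, requesting an external IP is as simple as setting the Service type. The manifest below is a minimal sketch; the service name, label, and ports are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical service name
spec:
  type: LoadBalancer    # asks the cluster's load balancer for an external IP
  selector:
    app: my-app         # hypothetical pod label
  ports:
  - port: 80            # port exposed on the external IP
    targetPort: 8080    # port the pods listen on
```

Without a load-balancer implementation in the cluster, this Service would sit in the `Pending` state with no external IP assigned.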

(Figure: Why a load balancer is needed)

Now that we have IP addresses assigned to our Service, we need to advertise them to the rest of the network so that external clients can reach the service. This is typically done by setting up BGP peering sessions between the Kubernetes nodes and Top-of-Rack (ToR) devices, which lets you tell the rest of the network about the networks and IP addresses used by your pods and your services.

(Figure: Why BGP peering is needed)
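With Cilium's BGP Control Plane, such a peering session can be declared as a Kubernetes resource. The sketch below uses the `CiliumBGPPeeringPolicy` CRD (available in recent Cilium releases; exact field names can vary between versions), with hypothetical ASNs, node labels, and peer addresses:

```yaml
# Sketch only: assumes Cilium with the BGP Control Plane enabled.
# ASNs, labels, and addresses below are illustrative placeholders.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering
spec:
  nodeSelector:
    matchLabels:
      rack: rack-1               # hypothetical node label selecting peering nodes
  virtualRouters:
  - localASN: 64512              # ASN used by the Kubernetes nodes
    exportPodCIDR: true          # advertise pod networks to the ToR
    neighbors:
    - peerAddress: "10.0.0.1/32" # hypothetical ToR switch address
      peerASN: 64500             # ToR's ASN
```

Each selected node then establishes a BGP session with the ToR device and advertises the configured routes upstream.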


What Makes Cilium Stand Out?

Cilium leverages eBPF (extended Berkeley Packet Filter) to execute network logic directly in the Linux kernel. This architecture enables deep programmability of packet flows without requiring kernel module changes or user-space proxies, yielding the following benefits:

  1. Lower latency and overhead
  2. Higher throughput
  3. Rich observability and policy enforcement
  4. Advanced routing and NAT support

These benefits carry over when Cilium is used to power service load balancing within your Kubernetes cluster.


How Cilium Implements Load Balancing

Cilium provides native load-balancing capabilities through its eBPF-based data path. When configured correctly, Cilium can:

  • Handle ClusterIP, NodePort, and LoadBalancer traffic natively
  • Use Direct Server Return (DSR) for efficient packet flow
  • Integrate with BGP Control Planes for advertising services externally
  • Avoid relying on iptables or kube-proxy

For LoadBalancer-type services specifically, Cilium integrates with external BGP routers or MetalLB-style Layer 2 neighbor advertisement to expose services on a pool of external IPs.

The key differentiator is how Cilium programs the data path at the kernel level for optimal packet handling.
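These capabilities are switched on when installing Cilium. The Helm values below are a minimal sketch based on recent Cilium charts; verify the option names against the chart version you deploy:

```yaml
# Sketch of Helm values for the Cilium chart (assumed names from
# recent releases; check your chart version's documentation):
kubeProxyReplacement: true   # eBPF datapath handles Services, no kube-proxy
routingMode: native          # native routing, generally required for DSR
loadBalancer:
  mode: dsr                  # Direct Server Return for efficient packet flow
bgpControlPlane:
  enabled: true              # built-in BGP speaker for external advertisement
```

With these values, Service traffic is handled entirely in the eBPF datapath rather than by iptables rules.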


What Is Cilium LB-IPAM?

Cilium LB-IPAM (LoadBalancer IP Address Management) is a feature introduced in Cilium to provide native Kubernetes LoadBalancer service support by:

  • Automatically allocating external IPs to services from pre-defined IP pools
  • Using eBPF to program the dataplane for fast and efficient load balancing
  • Optionally advertising these IPs via BGP to upstream routers (similar to MetalLB)

The result is a powerful and integrated load balancing mechanism, built directly into the Cilium agent, without relying on external controllers or daemons.
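Defining such an IP pool is a single CRD. The example below is a sketch using the `CiliumLoadBalancerIPPool` resource (in recent Cilium releases the range is given under `blocks`; older versions used a `cidrs` field), with a documentation-only CIDR as a placeholder:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: external-pool
spec:
  blocks:
  - cidr: "192.0.2.0/27"   # placeholder range (TEST-NET-1); use your own
```

Once the pool exists, Cilium allocates an address from it to each Service of type LoadBalancer; a `serviceSelector` can optionally restrict which Services draw from the pool.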


Cilium LB-IPAM compared to MetalLB

MetalLB, while effective, introduces separate daemons, protocol configuration, and scaling challenges. In contrast, Cilium LB-IPAM centralizes IP management, load balancing, and BGP control—all within the Cilium control plane and data plane—ensuring consistent observability, lower latency, and simplified operations. The table below compares Cilium LB-IPAM and MetalLB.

Feature                   | MetalLB               | Cilium LB-IPAM
--------------------------|-----------------------|-------------------------------
Architecture              | External controller   | Built into Cilium
IP Allocation             | Layer 2 or BGP        | IP pools via CRDs
Load Balancing Mechanism  | kube-proxy / iptables | eBPF-native, kube-proxy-free
Observability             | Basic                 | Extensive via Hubble
Security Integration      | None                  | Built-in Cilium policies
Scalability               | Moderate              | High (no iptables bottlenecks)

Conclusion

If you are already using Cilium for CNI in your bare metal/VM based Kubernetes clusters, activating LB-IPAM is a logical next step that eliminates the need for MetalLB or external BGP controllers. Even if you’re starting fresh, Cilium provides a more scalable and modern approach to handling LoadBalancer services on bare-metal clusters.

As Kubernetes becomes central to enterprise infrastructure, solutions that combine performance, observability, and security—like Cilium—offer a clear edge. Cilium LB-IPAM isn’t just a MetalLB replacement; it’s a step forward in building smarter, more integrated Kubernetes networking.

In a follow-on blog, we will walk you through how to set up, configure, and operate Cilium LB-IPAM on a Rafay MKS Kubernetes cluster running in an on-premises datacenter.