I was playing around a bit with Cilium to better understand how to set up L4 load balancing using the BGP Control Plane.
Cilium implements distributed load balancing for traffic between application containers and to external services and is able to fully replace components such as kube-proxy. The load balancing is implemented in eBPF using efficient hashtables allowing for almost unlimited scale.
I’m running Cilium on an OSS Kubernetes deployment (set up with kubeadm), in conjunction with a Ubiquiti EdgeRouter. In this post I will just have a look at the L4 capabilities; I explore the Ingress and Gateway API capabilities in a separate post.
To get started with L4 load balancing, you will (of course) need to install Cilium first, either through the cilium CLI or using Helm. I’ve used Helm to install Cilium:
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.5 --namespace kube-system
Because we already know we want to use BGP for load balancing (and I’m also using the kube-proxy replacement, which is required if you want to use the Gateway API implementation of Cilium), we want to inject some specific settings:
helm install cilium cilium/cilium --version 1.14.5 --namespace kube-system -f values.yaml
The values.yaml I am using is available here.
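I won’t reproduce the whole file, but a minimal sketch of the settings that matter for this post could look like this (the API server address is a placeholder you need to fill in for your own cluster):

# values.yaml (minimal sketch; only the settings relevant to this post)
kubeProxyReplacement: true
# Needed when running without kube-proxy: tell Cilium how to reach the API server
k8sServiceHost: <your-api-server-address>
k8sServicePort: 6443
# Enable the BGP Control Plane
bgpControlPlane:
  enabled: true
# Optional, for the Gateway API capabilities mentioned above
gatewayAPI:
  enabled: true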
So, the first thing we have to define is a Cilium Load Balancer IP Pool. In Cilium, LB IPAM (Load Balancer IP Address Management) is responsible for allocating and assigning IPs to Service objects. Other features are responsible for the actual advertising of the (load balancing) IPs.
My initial LB IPAM definition is very basic:
apiVersion: "cilium.io/v2alpha1" kind: CiliumLoadBalancerIPPool metadata: name: "01-default-pool" spec: cidrs: - cidr: "172.16.101.0/24" serviceSelector: matchExpressions: - {key: "io.kubernetes.service.namespace", operator: NotIn, values: ["shop"]}
The file can be downloaded here. It defines 172.16.101.0/24 as the default pool. The shop namespace is excluded from using this pool, because that particular namespace gets its IP addresses from another pool:
apiVersion: "cilium.io/v2alpha1" kind: CiliumLoadBalancerIPPool metadata: name: "02-pool-with-selector" spec: cidrs: - cidr: "172.16.102.0/24" serviceSelector: matchLabels: "io.kubernetes.service.namespace": "shop"
In this example the selector matches on the well-known io.kubernetes.service.namespace label: if a service lives in the shop namespace, subnet 172.16.102.0/24 is used. You can use matchLabels or matchExpressions to define customized selection criteria (as in the initial LB IPAM configuration above).
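To illustrate: any Service of type LoadBalancer in the shop namespace would be assigned an address from 172.16.102.0/24, while services in every other namespace draw from the default pool. A hypothetical example (the service name and ports are made up):

apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: shop
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080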
Now apply your LB IPAM configuration to the cluster.
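Assuming the two pool definitions were saved as 01-default-pool.yaml and 02-pool-with-selector.yaml (the filenames are mine), that looks like:

kubectl apply -f 01-default-pool.yaml
kubectl apply -f 02-pool-with-selector.yaml
# Verify that both pools were accepted
kubectl get ciliumloadbalancerippools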
The next step is to configure and apply a BGP configuration as defined here:
apiVersion: "cilium.io/v2alpha1" kind: CiliumBGPPeeringPolicy metadata: name: 01-bgp-peering-policy spec: nodeSelector: matchLabels: bgp-policy: lb virtualRouters: - localASN: 64512 exportPodCIDR: false neighbors: - peerAddress: '192.168.101.1/32' peerASN: 64512 serviceSelector: matchExpressions: - {key: somekey, operator: NotIn, values: ['never-used-value']}
This file defines how to connect to the BGP peer, in this example my Ubiquiti Edge device. The serviceSelector is configured in such a way that it announces all services to my BGP peer (a NotIn match against a value that is never used matches everything); the YAML is available for download here. You can of course implement more advanced configurations to announce different services to different BGP peers.
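Apply the peering policy to the cluster (the filename is again mine):

kubectl apply -f 01-bgp-peering-policy.yaml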
You need to label the Kubernetes nodes that should use the BGP policy:
kubectl label nodes worker01 bgp-policy=lb
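You can double-check which nodes carry the label:

kubectl get nodes -l bgp-policy=lb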
Now, at the Ubiquiti side, you have to tell your router to peer with the Kubernetes/Cilium environment. Log in to your Ubiquiti router and enter:
configure
to enter configuration mode. Now configure the BGP peerings, one per Kubernetes node:
set protocols bgp 64512 parameters router-id 192.168.101.1
set protocols bgp 64512 neighbor 192.168.101.201 remote-as 64512
set protocols bgp 64512 neighbor 192.168.101.202 remote-as 64512
set protocols bgp 64512 neighbor 192.168.101.203 remote-as 64512
Now commit and save:
commit; save
Now check if the BGP peering is configured and active:
show ip bgp summary
And at the Kubernetes side:
cilium bgp peers
Let’s now deploy a Kubernetes app and see if our LB IP address is advertised. I’m using Google’s microservices-demo for this example.
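The manifests go into the shop namespace, so the frontend picks up an IP from the second pool; assuming the namespace does not exist yet, create it first:

kubectl create namespace shop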
kubectl apply -f ./release/kubernetes-manifests-LB.yaml -n shop
Let’s check the service that’s being advertised:
kubectl get service -n shop | grep frontend-external
At the Ubiquiti side we can check if this IP address (172.16.102.229) is advertised:
show ip bgp
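To confirm a route for the service IP was actually installed, you can also query the routing table directly:

show ip route 172.16.102.229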
My router knows about this IP address, and the webshop is available on the network.
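As a quick sanity check from any machine on the network (using the address assigned to frontend-external):

curl -I http://172.16.102.229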
That’s it, I hope this was helpful!