Deploying Cilium on Kubernetes: A Comprehensive Setup Guide
This guide walks through setting up Cilium as the CNI plugin on a Kubernetes cluster built with kubeadm. The environment uses Ubuntu 24.04, Kubernetes v1.30.2, and containerd 1.7.18. The cluster consists of one master node and three worker nodes.
Environment Preparation
Perform the following steps on all nodes.
Enable SSH and Allow Root Login
Configure Time Synchronization
apt install chrony
systemctl start chrony.service
Edit /etc/chrony/chrony.conf to point to a time server:
server <CHRONY_SERVER> iburst
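After pointing chrony at the time server, restart the service and confirm that the node is actually syncing. This is a quick sanity check, not an exhaustive chrony setup:

```shell
# apply the new chrony.conf and verify synchronization
systemctl restart chrony.service
chronyc sources -v   # the configured server should eventually show a '*' state
chronyc tracking
```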
Hostname Resolution
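Each node must be able to resolve every other node's hostname, plus the control-plane endpoint used later by kubeadm. A minimal sketch is to append entries to /etc/hosts on every node; the IP addresses and node names below are placeholders, so substitute your own:

```shell
# placeholder addresses and hostnames -- replace with your actual node layout
cat >> /etc/hosts <<'EOF'
172.16.0.10  kubeapi.jnlikai.cc
172.16.0.10  master01
172.16.0.11  node01
172.16.0.12  node02
172.16.0.13  node03
EOF
```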
Disable Swap
swapoff -a
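Note that swapoff -a only disables swap until the next reboot. To make the change persistent, also comment out any swap entries in /etc/fstab, for example:

```shell
# disable swap now and prevent it from coming back after a reboot
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab
```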
Disable ufw
ufw disable
ufw status
Installing Required Packages
Install Containerd.io
Add the Docker CE repository using the Aliyun mirror. Note that apt-key is no longer available on Ubuntu 24.04, so the repository key is stored under /etc/apt/keyrings instead:
apt -y install apt-transport-https ca-certificates curl software-properties-common
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
add-apt-repository "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt-get install -y containerd.io
Configure Containerd.io
Generate default configuration:
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
Edit /etc/containerd/config.toml to apply the following settings.
Use SystemdCgroup
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Set Sandbox Image
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
Configure Image Mirrors
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://docker.mirrors.ustc.edu.cn", "https://registry.docker-cn.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
endpoint = ["https://registry.aliyuncs.com/google_containers"]
Configure Private Registry (Optional)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.jnlikai.cc"]
endpoint = ["https://harbor.jnlikai.cc"]
Skip TLS Verification for Private Registry (Optional)
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.jnlikai.cc".tls]
insecure_skip_verify = true
Restart Containerd:
systemctl daemon-reload
systemctl restart containerd
Install nerdctl
Download the binary from GitHub, extract it, and place it in /usr/local/bin/.
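A sketch of the download and install steps follows; the version number is an example, so check the nerdctl releases page for the current one:

```shell
# 1.7.6 is an example version -- check https://github.com/containerd/nerdctl/releases
NERDCTL_VERSION=1.7.6
curl -LO https://github.com/containerd/nerdctl/releases/download/v${NERDCTL_VERSION}/nerdctl-${NERDCTL_VERSION}-linux-amd64.tar.gz
tar -xzf nerdctl-${NERDCTL_VERSION}-linux-amd64.tar.gz -C /usr/local/bin/ nerdctl
nerdctl version
```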
Install kubelet, kubeadm, and kubectl
Set up the Kubernetes repository:
apt-get update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
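It is common practice to pin these three packages after installing them, so that a routine apt upgrade does not move the cluster to an unintended Kubernetes version:

```shell
# prevent apt from upgrading the cluster components unexpectedly
apt-mark hold kubelet kubeadm kubectl
```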
Cluster Initialization
Initialize the Master Node
To skip kube-proxy installation (since Cilium will replace it), use the --skip-phases flag.
kubeadm init \
--control-plane-endpoint="kubeapi.jnlikai.cc" \
--kubernetes-version=v1.30.2 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--image-repository=registry.aliyuncs.com/google_containers \
--upload-certs \
--skip-phases=addon/kube-proxy
After initialization, copy the admin configuration and make it readable by your user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Join Worker Nodes
Use the join command provided by the init output on each worker node.
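The join command printed by kubeadm init has the following shape; the token and hash below are placeholders, so use the exact values from your own init output:

```shell
# <TOKEN> and <HASH> are placeholders from the `kubeadm init` output
kubeadm join kubeapi.jnlikai.cc:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>
```

If the token has expired, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`.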
Install Cilium
Install Cilium CLI
Download the Cilium CLI tool, extract it, and place the binary in /usr/local/bin/.
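One way to do this, following the upstream release layout, is:

```shell
# fetch the latest stable CLI version tag, then the matching release tarball
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all \
    https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
tar -xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin
cilium version --client
```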
Deploy Cilium
List available versions:
cilium install --list-versions
Deploy Cilium with desired settings. Below are some common configurations.
VXLAN Mode Example
cilium install \
--set kubeProxyReplacement=strict \
--set ipam.mode=kubernetes \
--set routingMode=tunnel \
--set tunnelProtocol=vxlan \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24
Native Routing Mode Example
cilium install \
--set kubeProxyReplacement=strict \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set autoDirectNodeRoutes=true
Native Routing with Ingress Controller
cilium install \
--set kubeProxyReplacement=strict \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set autoDirectNodeRoutes=true \
--set ingressController.enabled=true \
--set ingressController.loadbalancerMode=shared
Verify Cilium Status
cilium status
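Beyond the status summary, it is worth confirming the agent pods are running and, optionally, running the built-in end-to-end check (which deploys temporary test workloads into the cluster):

```shell
cilium status --wait                                   # block until all components report OK
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
cilium connectivity test                               # optional, takes several minutes
```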
Advanced Features
- Enable BPF masquerade: --set bpf.masquerade=true
- Set the load balancer mode: --set loadBalancer.mode=dsr or --set loadBalancer.mode=hybrid (DSR mode requires --set autoDirectNodeRoutes=true)
- Enable legacy host routing: --set bpf.hostLegacyRouting=true
Enable Hubble and Hubble UI
Enable via Cilium CLI after deployment:
cilium hubble enable --ui
Or include these options during installation:
--set hubble.enabled="true" \
--set hubble.listenAddress=":4244" \
--set hubble.relay.enabled="true" \
--set hubble.ui.enabled="true"
You can expose Hubble UI externally using Cilium's Ingress to inspect cluster state via a browser.
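For a quick look without configuring Ingress, the CLI can port-forward to the UI directly; the manual port-forward below assumes the default hubble-ui service in kube-system:

```shell
# opens a local port-forward to the Hubble UI and launches a browser
cilium hubble ui

# or forward manually (assumes the default hubble-ui service on port 80)
kubectl -n kube-system port-forward svc/hubble-ui 12000:80
```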