
Deploying a Highly Available Kubernetes 1.29.2 Cluster with Kubeadm


Cluster Architecture & Network Topology

Host                 | Hostname  | Operating System | Specifications             | IP Address
---------------------|-----------|------------------|----------------------------|--------------
Control Plane Node 1 | cp-01     | CentOS 7.9       | 4 vCPU / 8 GB RAM / 200 GB | 192.168.1.201
Control Plane Node 2 | cp-02     | CentOS 7.9       | 4 vCPU / 8 GB RAM / 200 GB | 192.168.1.203
Control Plane Node 3 | cp-03     | CentOS 7.9       | 4 vCPU / 8 GB RAM / 200 GB | 192.168.1.205
Worker Node 1        | worker-01 | Rocky Linux 9.3  | 8 vCPU / 16 GB RAM / 200 GB | 192.168.1.101
Worker Node 2        | worker-02 | Rocky Linux 9.3  | 8 vCPU / 16 GB RAM / 200 GB | 192.168.1.102
Worker Node 3        | worker-03 | Rocky Linux 9.3  | 8 vCPU / 16 GB RAM / 200 GB | 192.168.1.103
Virtual IP (VIP)     | N/A       | N/A              | N/A                        | 192.168.1.10

Subnet Allocation:

  • Host Network: 192.168.1.0/24
  • Service CIDR: 10.96.0.0/12
  • Pod CIDR: 10.244.0.0/16

Base OS Hardening & Kernel Tuning

Execute the following baseline configuration across all cluster nodes. Adjust interface names to match your hardware.

Host Identification & Resolution

HOSTS_FILE="/etc/hosts"
NODE_MAP="192.168.1.201 cp-01
192.168.1.203 cp-02
192.168.1.205 cp-03
192.168.1.101 worker-01
192.168.1.102 worker-02
192.168.1.103 worker-03"
echo -e "${NODE_MAP}" >> "${HOSTS_FILE}"
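kubeadm also requires each node to carry the unique hostname from the map above. As a convenience, each node can derive and set its own name from the shared map; the primary interface name `ens33` is an assumption, adjust it to your hardware:

```shell
# Look up this node's hostname from the shared IP-to-name map and apply it.
NODE_MAP="192.168.1.201 cp-01
192.168.1.203 cp-02
192.168.1.205 cp-03
192.168.1.101 worker-01
192.168.1.102 worker-02
192.168.1.103 worker-03"

# Extract this node's primary IPv4 address (interface name is an assumption).
MY_IP=$(ip -4 addr show ens33 2>/dev/null | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
# Find the matching hostname in the map; empty if the IP is not listed.
MY_NAME=$(awk -v ip="${MY_IP}" '$1 == ip {print $2}' <<< "${NODE_MAP}")
if [ -n "${MY_NAME}" ]; then
    hostnamectl set-hostname "${MY_NAME}"
fi
```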

System Services & Security Policies

Disable interfering daemons and relax the security policies. Note: on the Rocky Linux 9 workers, NetworkManager is the primary network stack and disabling it drops connectivity; disable it only on the CentOS 7 nodes, or leave it running everywhere and let the CNI manage its own interfaces.

systemctl disable --now firewalld
systemctl disable --now NetworkManager   # CentOS 7 nodes only
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -ri '/\sswap\s/ s/^([^#])/#\1/' /etc/fstab   # comment out swap entries rather than deleting lines
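A quick sanity check confirms the changes took effect on each node:

```shell
# /proc/swaps contains only its header line when no swap is active.
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
    echo "swap: off"
else
    echo "swap: STILL ACTIVE"
fi
# SELinux should report Permissive now, Disabled after the next reboot.
if command -v getenforce >/dev/null 2>&1; then
    echo "selinux: $(getenforce)"
fi
```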

Kernel Upgrade (CentOS 7 Specific)

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt -y
grub2-set-default 0
reboot

Network Modules & Forwarding Rules

Enable IPVS and bridge filtering required for Kubernetes routing:

modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack overlay br_netfilter
echo -e "ip_vs\nip_vs_rr\nip_vs_wrr\nip_vs_sh\nnf_conntrack\noverlay\nbr_netfilter" > /etc/modules-load.d/k8s-net.conf
systemctl restart systemd-modules-load.service

cat <<EOF > /etc/sysctl.d/k8s-forwarding.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
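Before moving on, verify that the modules actually loaded and the forwarding sysctl is live. Reading /proc directly keeps the check dependency-free:

```shell
# Each required module should appear in /proc/modules.
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack overlay br_netfilter; do
    grep -qw "^${m}" /proc/modules && echo "${m}: loaded" || echo "${m}: MISSING"
done
# Must print 1 once sysctl --system has been applied.
cat /proc/sys/net/ipv4/ip_forward
```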

Advanced System Limits

cat <<EOF >> /etc/sysctl.d/k8s-tuning.conf
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
fs.inotify.max_user_watches = 524288
kernel.pid_max = 4194304
vm.max_map_count = 262144
EOF
sysctl --system

cat <<EOF >> /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000
* soft nproc 100000
* hard nproc 100000
EOF

Load Balancing & High Availability Layer

Deploy Nginx and Keepalived exclusively on control plane nodes.

Nginx Stream Proxy

Install Nginx together with its dynamic stream module; on CentOS 7 both come from EPEL (yum install -y epel-release && yum install -y nginx nginx-mod-stream). Then configure TCP load balancing on port 9443: the kube-apiserver already binds 6443 locally on each control plane node, so the proxy must listen on a different port.

tee /etc/nginx/nginx.conf << 'NGINX_CFG'
load_module /usr/lib64/nginx/modules/ngx_stream_module.so;

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    upstream k8s_api {
        least_conn;
        server 192.168.1.201:6443 weight=10 max_fails=3 fail_timeout=5s;
        server 192.168.1.203:6443 weight=10 max_fails=3 fail_timeout=5s;
        server 192.168.1.205:6443 weight=10 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 9443;
        proxy_pass k8s_api;
        proxy_connect_timeout 5s;
        proxy_timeout 30s;
    }
}
NGINX_CFG

systemctl enable --now nginx

Keepalived VRRP Configuration

Install keepalived and create a health check script:

yum install -y keepalived ipvsadm
tee /etc/keepalived/check_nginx.sh << 'CHK_SCRIPT'
#!/usr/bin/env bash
if ! pgrep -x nginx > /dev/null; then
    systemctl start nginx
    sleep 2
    if ! pgrep -x nginx > /dev/null; then
        systemctl stop keepalived
    fi
fi
exit 0
CHK_SCRIPT
chmod +x /etc/keepalived/check_nginx.sh

Master Node (cp-01):

tee /etc/keepalived/keepalived.conf << 'MASTER_VRRP'
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
    fall 2
    rise 1
}

vrrp_instance VI_K8S {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass SecurePass123
    }
    virtual_ipaddress {
        192.168.1.10
    }
    track_script {
        chk_nginx
    }
}
MASTER_VRRP

Backup Nodes (cp-02, cp-03): change state to BACKUP and set priority to 140 and 130 respectively. Enable the daemon on all control plane nodes: systemctl enable --now keepalived.
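To confirm the failover wiring, check which node currently holds the VIP; stopping nginx on the MASTER should move it to the next-highest-priority node within a few seconds. The interface name is again an assumption:

```shell
# Report whether this node currently owns the virtual IP.
VIP="192.168.1.10"
IFACE="ens33"
if ip -4 addr show "${IFACE}" 2>/dev/null | grep -qw "${VIP}"; then
    echo "VIP ${VIP}: held on this node"
else
    echo "VIP ${VIP}: not on this node"
fi
```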

Container Runtime: Containerd 1.6.28

Configure the runtime across every cluster node.

Installation & Registry Mirrors

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io-1.6.28

containerd config default | tee /etc/containerd/config.toml > /dev/null
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Match whatever pause tag the default config ships with, rather than a hardcoded version.
sed -i 's#sandbox_image = .*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

Configure Registry Accelerators

Point containerd at the certs.d directory first; the per-registry hosts.toml files below are ignored unless registry.config_path is set in config.toml:

sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml

MIRROR_BASE="/etc/containerd/certs.d"
mkdir -p "${MIRROR_BASE}/docker.io" "${MIRROR_BASE}/registry.k8s.io" "${MIRROR_BASE}/quay.io"

cat <<EOF > "${MIRROR_BASE}/docker.io/hosts.toml"
server = "https://docker.io"

[host."https://xk9ak4u9.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

cat <<EOF > "${MIRROR_BASE}/registry.k8s.io/hosts.toml"
server = "https://registry.k8s.io"

[host."https://k8s.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

systemctl enable --now containerd

crictl Setup

tee /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
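With the runtime and crictl configured, a short smoke test on any node exercises both the containerd socket and the mirror configuration:

```shell
# Pull the sandbox image through the configured mirrors, then list it.
crictl pull registry.aliyuncs.com/google_containers/pause:3.9
crictl images | grep pause
```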

Kubernetes Control Plane Installation

Install kubeadm, kubelet, and kubectl version 1.29.2 on every node.

tee /etc/yum.repos.d/k8s.repo << 'K8S_REP'
[kubernetes-stable]
name=Kubernetes v1.29
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
K8S_REP

yum install -y kubelet-1.29.2 kubeadm-1.29.2 kubectl-1.29.2
systemctl enable --now kubelet
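Initialization and joins go faster, and fail earlier on registry problems, if the control plane images are pre-pulled on every node:

```shell
# Pre-pull the v1.29.2 control plane images from the mirror repository.
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.29.2
```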

Cluster Initialization

Execute the following exclusively on the primary control plane node (cp-01).

Initialization Configuration

Save the following manifest as kubeadm-init.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.1.201"
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.29.2"
controlPlaneEndpoint: "192.168.1.10:9443"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  dnsDomain: cluster.local
imageRepository: registry.aliyuncs.com/google_containers
etcd:
  local:
    dataDir: /var/lib/etcd
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Bootstrapping Process

kubeadm init --config=kubeadm-init.yaml --upload-certs | tee /var/log/k8s-bootstrap.log

# Post-initialization credentials setup
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
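At this point the first control plane should be up. A quick verification pass from cp-01 (the node will report NotReady until the CNI is deployed in the next section):

```shell
# Confirm the API server answers and the static pods are running.
kubectl cluster-info
kubectl get nodes
kubectl get pods -n kube-system -o wide
```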

Network Overlay: Calico CNI v3.27.0

Deploy the pod network using the official manifests. Ensure the CIDR matches your cluster configuration.

CALICO_VER="v3.27.0"
wget https://raw.githubusercontent.com/projectcalico/calico/${CALICO_VER}/manifests/calico.yaml

Apply the required pod CIDR and network interface overrides. In the stock v3.27.0 manifest, CALICO_IPV4POOL_CIDR is commented out, so it must be uncommented as well as changed; IP_AUTODETECTION_METHOD is absent entirely and is inserted here after the IP env var (value: "autodetect") in the calico-node container. The interface name is hardware specific:

sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
sed -i '/value: "autodetect"/a\            - name: IP_AUTODETECTION_METHOD\n              value: "interface=ens33"' calico.yaml

kubectl apply -f calico.yaml

Validate DNS resolution:

kubectl run dns-check --image=busybox:1.28 --rm -it -- nslookup kubernetes.default.svc.cluster.local

Worker Node Registration

Generate join commands for the data plane nodes. Run the token and certificate upload on any initialized control plane node.

JOIN_CMD=$(kubeadm token create --print-join-command)
# The key is the last line of the upload-certs output.
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -n 1)

echo "Worker join:        ${JOIN_CMD}"
echo "Control plane join: ${JOIN_CMD} --control-plane --certificate-key ${CERT_KEY}"

Execute the worker variant of the generated kubeadm join ... command on each worker node, and the control plane variant (with --control-plane --certificate-key) on cp-02 and cp-03, to finalize the HA cluster topology.
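After all joins complete, confirm the full topology and optionally label the workers; the role label used here is a common convention, not something kubeadm sets automatically:

```shell
# Expect three control plane nodes and three workers, all Ready once the CNI settles.
kubectl get nodes -o wide

# Give the worker nodes a visible role in `kubectl get nodes` output.
for w in worker-01 worker-02 worker-03; do
    kubectl label node "${w}" node-role.kubernetes.io/worker= --overwrite
done
```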
