Kubeadm-Based Kubernetes Cluster Setup and Configuration
Clear any existing container runtime installations:
sudo yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine docker-ce docker-ce-cli containerd.io
Install the prerequisite packages and add the Docker CE repository. The yum-config-manager line adds the upstream repo; the wget line then overwrites it with the Aliyun mirror's copy of the same file, for hosts that cannot reach download.docker.com (the two are alternatives, not both required):
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 wget
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Deploy a specific Docker release and activate the service:
sudo yum install -y docker-ce-18.06.1.ce-3.el7
sudo systemctl enable docker --now
docker --version
Create the daemon configuration to utilize a registry proxy:
cat <<'JSONEOF' | sudo tee /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry-proxy.example.com"]
}
JSONEOF
sudo systemctl daemon-reload
sudo systemctl restart docker
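A syntax error in daemon.json keeps dockerd from starting at all, so it is worth validating the file before restarting the service. A minimal sketch, assuming python3 is available on the host (jq works equally well):

```shell
# Fail fast if /etc/docker/daemon.json is not well-formed JSON; a broken
# file makes `systemctl restart docker` fail with an opaque unit error.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: missing or malformed"
fi
```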
Define the Aliyun repository for Kubernetes components:
cat <<'REPOEOF' | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
REPOEOF
Install the required kubeadm, kubelet, and kubectl packages:
sudo yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
sudo systemctl enable kubelet
The kubelet service will crash-loop (systemd reports it as "activating (auto-restart)") until kubeadm init generates its configuration; this is expected and can be confirmed with systemctl status kubelet.
To avoid pulling issues from the default registry, fetch the components using an alternative mirror and re-tag them. Create a script to automate this process:
#!/bin/bash
# Pull the control-plane images from an Aliyun mirror, re-tag them with the
# k8s.gcr.io names kubeadm expects, then remove the mirror-named tags.
set -euo pipefail
ALT_REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
K8S_VER=v1.18.0
# kubeadm prints images as k8s.gcr.io/<name>:<tag>; keep only <name>:<tag>.
COMPONENTS=$(kubeadm config images list --kubernetes-version=${K8S_VER} | awk -F '/' '{print $2}')
for ITEM in ${COMPONENTS}; do
    sudo docker pull "${ALT_REPO}/${ITEM}"
    sudo docker tag "${ALT_REPO}/${ITEM}" "k8s.gcr.io/${ITEM}"
    sudo docker rmi -f "${ALT_REPO}/${ITEM}"
done
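The awk split in the script assumes each image reference contains exactly one slash (k8s.gcr.io/&lt;name&gt;:&lt;tag&gt;), which holds for the v1.18.0 image list. The transformation can be sanity-checked offline with sample names in the format kubeadm prints, no Docker required:

```shell
# Sample lines shaped like `kubeadm config images list` output, piped
# through the same awk the script uses; what remains is the <name>:<tag>
# suffix that gets re-tagged under k8s.gcr.io.
printf '%s\n' \
  'k8s.gcr.io/kube-apiserver:v1.18.0' \
  'k8s.gcr.io/pause:3.2' \
  'k8s.gcr.io/coredns:1.6.7' \
| awk -F '/' '{print $2}'
# Prints kube-apiserver:v1.18.0, pause:3.2 and coredns:1.6.7, one per line.
```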
Execute the cluster initialization on the control plane. The --image-repository flag makes kubeadm pull directly from the Aliyun mirror, so the pre-pull script above is optional when this flag is used:
sudo kubeadm init \
--apiserver-advertise-address=10.0.2.15 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
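Both CIDR flags should name proper network addresses (base address aligned to the prefix length), and --pod-network-cidr has to match the Network value in Flannel's manifest, which defaults to 10.244.0.0/16. A pure-bash sketch of the alignment rule (cidr_aligned is a hypothetical helper for illustration, not a kubeadm command):

```shell
# Succeeds iff masking the base address with the prefix leaves it
# unchanged, i.e. the CIDR names a real network (hypothetical helper).
cidr_aligned() {
  local ip=${1%/*} bits=${1#*/} o1 o2 o3 o4 addr mask
  IFS=. read -r o1 o2 o3 o4 <<< "$ip"
  addr=$(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( addr & mask )) -eq "$addr" ]
}
cidr_aligned 10.96.0.0/12  && echo "10.96.0.0/12 is a valid network"
cidr_aligned 10.100.0.0/12 || echo "10.100.0.0/12 normalizes to 10.96.0.0/12"
```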
If a previous initialization attempt failed and left residual configuration, reset the environment before retrying. Note that kubeadm reset does not flush iptables rules or remove a stale kubeconfig, so clean those up manually as well:
sudo kubeadm reset
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F
rm -rf $HOME/.kube
After successful initialization, configure kubectl access for a non-root user (as root, exporting KUBECONFIG=/etc/kubernetes/admin.conf works as well):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If DNS resolution fails for the Flannel manifest URL, manually resolve the domain by adding a hosts entry (this is one of GitHub's content-delivery IPs and may change over time):
echo "185.199.108.133 raw.githubusercontent.com" | sudo tee -a /etc/hosts
Download the Flannel manifest and apply it (the URL below is the path the flannel project published at the time; the project has since moved from the coreos organization to flannel-io):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Connect worker machines to the cluster using the join command printed at the end of the initialization output. Tokens expire after 24 hours by default; run kubeadm token create --print-join-command on the control plane to generate a fresh one:
sudo kubeadm join 10.0.2.15:6443 --token <generated-token> --discovery-token-ca-cert-hash sha256:<generated-hash>
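If the token is still valid but the hash was lost, the discovery hash can be recomputed from the cluster CA certificate. This pipeline follows the kubeadm reference documentation and assumes kubeadm's default PKI path and an RSA CA key:

```shell
# SHA-256 of the CA's DER-encoded public key; this is the value that
# follows "sha256:" in the kubeadm join command.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```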
Verify the cluster state; nodes report NotReady until the Flannel pods in kube-system reach Running:
kubectl get nodes
kubectl get pods -n kube-system