Provisioning a Kubernetes Cluster with Kubeadm: Control Plane and Worker Nodes
This workflow details the standardized procedure for initializing a Kubernetes environment. The process covers control plane bootstrapping, container asset synchronization, worker node integration, and baseline management interface configuration. All steps assume pre-configured hostnames, network interfaces, and container runtimes.
Control Plane Initialization
Prerequisite Packages
Configure the package repository and resolve dependencies for the core utilities. The following script demonstrates a structured approach to source declaration and installation:
# Define repository metadata
REPO_IDENTIFIER="kubernetes"
SOURCE_ENDPOINT="https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64"
TARGET_PACKAGES=("kubelet" "kubeadm" "kubectl" "kubernetes-cni")
# Generate repository configuration (unquoted heredoc delimiter so the
# variables defined above expand into the file)
cat > /etc/yum.repos.d/${REPO_IDENTIFIER}.repo << EOF
[${REPO_IDENTIFIER}]
name=${REPO_IDENTIFIER}
baseurl=${SOURCE_ENDPOINT}
enabled=1
gpgcheck=0
EOF
# Install dependencies and activate the daemon
yum install -y ${TARGET_PACKAGES[*]}
systemctl daemon-reload
systemctl enable --now kubelet
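A quick sanity check after installation: confirm the package versions (these should match on every node) and the service state. Note that until the cluster is initialized, the kubelet enters a restart loop waiting for its configuration; this is expected:
# Verify installed versions
rpm -q kubelet kubeadm kubectl kubernetes-cni
# Inspect the service; a crash-loop before kubeadm init is normal
systemctl status kubelet --no-pager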
Container Image Preparation
Cluster components rely on specific container images hosted on k8s.gcr.io. In restricted network environments where that registry is unreachable, the images must be pulled from a mirror and retagged locally. The following routine automates the mirroring, retagging, and cleanup:
# Retrieve standard component manifest
COMPONENT_IMAGES=$(kubeadm config images list)
# Iterate through the list: pull each image from the mirror namespace,
# retag it to the canonical k8s.gcr.io name, then drop the mirror tag
echo "$COMPONENT_IMAGES" | while read -r FULL_PATH; do
  # Skip CoreDNS here; it is published under a separate repository and handled below
  [[ "$FULL_PATH" == *"coredns"* ]] && continue
  MIRROR_IMAGE="${FULL_PATH/k8s.gcr.io/mirrorgooglecontainers}"
  docker pull "$MIRROR_IMAGE"
  docker tag "$MIRROR_IMAGE" "$FULL_PATH"
  docker rmi "$MIRROR_IMAGE"
done
# Process CoreDNS separately
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1
Validate the local cache using docker images before advancing.
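For a stricter check than eyeballing the output, the expected manifest can be compared against the local cache. A minimal verification sketch:
# Flag any component image missing from the local Docker cache
kubeadm config images list | while read -r IMG; do
  docker image inspect "$IMG" > /dev/null 2>&1 && echo "OK      $IMG" || echo "MISSING $IMG"
done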
Cluster Bootstrap
The control plane is instantiated via the bootstrap utility. Key configuration directives dictate networking topology, certificate generation, and workload distribution:
--apiserver-advertise-address: Binds the API server to a specific network interface.
--pod-network-cidr: Defines the subnet allocation for workload pods.
--service-cidr: Reserves IP ranges for internal cluster services.
--image-repository: Redirects image pulls to an alternative registry.
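Optionally, the component images can be pre-pulled through the same repository redirection before initialization, which surfaces registry problems early. A sketch using the values from the sequence below:
kubeadm config images pull \
  --kubernetes-version=v1.15.0 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers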
A typical initialization sequence utilizes aggregated parameters:
BOOTSTRAP_FLAGS=(
"--kubernetes-version=v1.15.0"
"--apiserver-advertise-address=172.16.2.201"
"--pod-network-cidr=10.0.0.0/16"
"--service-cidr=11.0.0.0/12"
"--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers"
)
kubeadm init "${BOOTSTRAP_FLAGS[@]}"
Upon successful completion, the utility outputs a registration command for downstream machines. Securely archive this token and certificate hash for cross-node authentication:
kubeadm join 172.16.2.201:6443 \
--token jx82lw.8ephcufcot5j06v7 \
--discovery-token-ca-cert-hash sha256:180a8dfb45398cc6c3addd84a61c1bd4364297da1e91611c8c46a976dc12ff17
Note: Static Pods generated during bootstrapping are defined by manifests in /etc/kubernetes/manifests/ and are managed exclusively by the local kubelet; they operate outside the API server's controller lifecycle. The bootstrap routine also taints the control plane node so that ordinary workloads are not scheduled onto it.
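Both the manifests and the taint can be inspected directly (the node name below is a placeholder):
# Static Pod manifests written by kubeadm; expect etcd.yaml, kube-apiserver.yaml,
# kube-controller-manager.yaml, and kube-scheduler.yaml
ls /etc/kubernetes/manifests/
# Confirm the control plane taint (substitute the actual node name)
kubectl describe node <control-plane-node> | grep -i taints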
Access Configuration
Grant administrative privileges to the command-line interface by exposing the generated credential file:
# For root contexts
export KUBECONFIG=/etc/kubernetes/admin.conf
# For unprivileged accounts
mkdir -p ~/.kube
cp -i /etc/kubernetes/admin.conf ~/.kube/config
chown $(id -u):$(id -g) ~/.kube/config
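A quick way to confirm the credentials are active:
kubectl cluster-info
kubectl config current-context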
CNI Plugin Deployment
Kubernetes requires a Container Network Interface (CNI) implementation to enable inter-pod communication. Applying the official Flannel manifest provisions the underlying overlay network. Note that the stock manifest defaults to a 10.244.0.0/16 pod network; if the cluster was initialized with a different --pod-network-cidr (as above), edit the net-conf.json section of the manifest to match before applying it:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
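Once applied, Flannel runs as a DaemonSet in the kube-system namespace. To watch it come up (the label below matches the stock manifest):
kubectl get pods -n kube-system -l app=flannel -o wide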
Component Verification
Confirm that all control plane resources are operational before integrating workers:
kubectl get cs
kubectl get pods -n kube-system -o wide
All pods within the kube-system namespace must achieve a Running state.
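Rather than polling manually, kubectl can block until the pods become ready. A minimal sketch with an arbitrary five-minute timeout:
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s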
Worker Node Integration
Agent Installation
Deploy the runtime agent on each target machine using the identical repository configuration:
yum install -y kubelet kubeadm
systemctl daemon-reload
systemctl enable --now kubelet
Image Synchronization
Execute the container preparation routine from the control plane section on every worker node to guarantee component compatibility.
Cluster Join Operation
Execute the previously archived join string to register the machine with the central controller:
kubeadm join 172.16.2.201:6443 \
--token jx82lw.8ephcufcot5j06v7 \
--discovery-token-ca-cert-hash sha256:180a8dfb45398cc6c3addd84a61c1bd4364297da1e91611c8c46a976dc12ff17
Expired tokens can be regenerated dynamically using kubeadm token create when required.
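A sketch of the token workflow: --print-join-command emits a complete join string, and the CA hash can be recomputed from the cluster certificate if the original output was lost:
# Issue a fresh token together with a ready-to-run join command
kubeadm token create --print-join-command
# List existing tokens and their TTLs
kubeadm token list
# Recompute the discovery hash from the CA certificate (run on the control plane)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'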
Credential Distribution
To administer the cluster remotely from worker nodes, duplicate the administrator configuration securely:
# Executed from the control plane
scp /etc/kubernetes/admin.conf operator@worker-node-ip:/etc/kubernetes/admin.conf
# Executed on each worker node
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
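Since admin.conf carries full cluster-admin credentials, tighten its permissions on the worker after copying:
chmod 600 /etc/kubernetes/admin.conf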
Node Status Validation
Validate cluster topology and node readiness:
kubectl get nodes
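Illustrative output once the CNI is ready and both machines have registered (hostnames are placeholders):
NAME       STATUS   ROLES    AGE   VERSION
master     Ready    master   10m   v1.15.0
worker-1   Ready    <none>   2m    v1.15.0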
Management Interface Configuration
Deploying a graphical dashboard simplifies resource monitoring and troubleshooting. The following sequence handles manifest application, authorization binding, and secure access provisioning:
Dashboard Deployment
# Mirror required image locally if external connectivity is restricted
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
# Apply official deployment manifests
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
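Confirm that the deployment and its service were created (the label and service name come from the v1.10.1 manifest):
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
kubectl get svc -n kube-system kubernetes-dashboard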
Authorization Binding
Create a privileged service account bound to the cluster-admin role to bypass the dashboard's default restricted permissions:
cat > dashboard-admin-rbac.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF
kubectl create -f dashboard-admin-rbac.yaml
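Verify that both objects exist before attempting to log in:
kubectl get serviceaccount dashboard-admin -n kube-system
kubectl get clusterrolebinding dashboard-admin-binding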
Proxy Access & Authentication
Extract the authentication token and establish a secure tunnel:
DASHBOARD_SECRET=$(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
kubectl describe -n kube-system secret/$DASHBOARD_SECRET | grep token: | awk '{print $2}'
kubectl proxy
Navigate to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a web browser. Paste the extracted token into the login prompt to access the cluster topology view.