Fading Coder


Kubernetes Installation and Testing with Docker on CentOS

Tech · May 12
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Kubernetes Installation

Refer to the official Kubernetes documentation for adding nodes to the cluster.

Environment Configuration

swapoff -a
setenforce 0
rm -rf $HOME/.kube # if previously installed
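Note that swapoff -a and setenforce 0 only last until the next reboot. A sketch of making both changes persistent, assuming the default CentOS 7 file locations (review the sed patterns against your actual files before running):

```shell
# Comment out swap entries so swap stays off after reboot
sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab

# Keep SELinux permissive across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```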

Software Installation

# Install Docker
yum install docker -y
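yum installs Docker but does not start it. Before continuing, start and enable it, and note the reported cgroup driver, since the kubelet must use the same one (this is exactly the mismatch analyzed in the error section further down):

```shell
systemctl enable docker
systemctl start docker

# Note the driver (cgroupfs vs systemd); the kubelet must match it
docker info 2>/dev/null | grep -i 'cgroup driver'
```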

# Install kubeadm, kubectl
# Add repository:
cat << EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install specific versions of kubeadm, kubectl, kubelet
# List all versions: yum list available kubeadm --showduplicates
# For example, installing Kubernetes 1.23.0 requires matching versions
# Otherwise you may encounter errors like:
# this version of kubeadm only supports deploying clusters with the control plane version >= 1.27.0. Current version: v1.23.0
yum install kubeadm-1.23.0-0 kubectl-1.23.0-0 kubelet-1.23.0-0 -y
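After installation, enable kubelet (kubeadm will start it during init) and confirm that the installed versions actually match:

```shell
systemctl enable kubelet

# All three should report v1.23.0
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet
```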

Image Downloading

# Use kubeadm to initialize the Master node
# Run the following command to check required Docker images and versions:
kubeadm config images list

# Generate a script with version information
# (quote EOF so the ${...} variables expand when the script runs,
# not when this heredoc is written, which would leave them empty):
cat << 'EOF' >download_images.sh
#!/bin/bash
set -e

KUBE_VERSION=v1.23.0
KUBE_PAUSE_VERSION=3.6
ETCD_VERSION=3.5.1-0
CORE_DNS_VERSION=v1.8.6

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(
    kube-proxy:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-apiserver:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd:${ETCD_VERSION}
    coredns:${CORE_DNS_VERSION}
)

for image_name in ${images[@]} ; do
    docker pull ${ALIYUN_URL}/$image_name
    docker tag  ${ALIYUN_URL}/$image_name ${GCR_URL}/$image_name
    docker rmi ${ALIYUN_URL}/$image_name
done
# Additional tag
docker tag ${GCR_URL}/coredns:v1.8.6 ${GCR_URL}/coredns/coredns:v1.8.6
EOF

chmod +x download_images.sh
./download_images.sh
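If the script succeeds, the retagged images should all be present under the k8s.gcr.io prefix:

```shell
# Expect kube-apiserver, kube-controller-manager, kube-scheduler,
# kube-proxy, pause, etcd, and coredns entries
docker images | grep k8s.gcr.io
```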

Cluster Initialization

Reset Environment

# If previously installed, reset the environment
kubeadm reset

Initialize Kubernetes

# View default configuration for initialization:
kubeadm config print init-defaults >init_config.yaml

# Initialize (you may modify the configuration obtained above, 
# such as Kubernetes version or IP settings)
kubeadm init --config=init_config.yaml

# Alternatively, use this direct command:
kubeadm init --kubernetes-version=1.23.0 --node-name=master-node

Error Analysis

This step often fails. Read the error messages carefully: adding --v=5 to kubeadm init prints detailed diagnostic information. If the problem persists, check the kubelet logs with journalctl -xeu kubelet.

Cgroup Driver Error
# error: Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

# Create configuration file k8s-config.yaml with the following content:
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.23.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs

# After kubeadm reset, re-initialize with:
kubeadm init --config=k8s-config.yaml

Configure Authorization

After successful initialization, you'll see instructions. Configure authorization based on those instructions or add worker nodes to the cluster.

Sample output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.20.114:6443 --token abc123.def456 \
	--discovery-token-ca-cert-hash sha256:dd5f58c9ad1113daf894c79a61cadd67ded2c89ee99611ebd4f7e50dc3d89658 

If you forget the token and sha256, retrieve them with:

# Check token
kubeadm token list
# Check sha256
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
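The long openssl pipeline simply computes a sha256 digest over the DER-encoded public key of the cluster CA. To see what it produces, here is a sketch run against a throwaway self-signed certificate (on a real cluster you point it at /etc/kubernetes/pki/ca.crt, as above):

```shell
# Create a throwaway self-signed cert just to demonstrate the pipeline
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 1 -subj "/CN=demo" 2>/dev/null

# Same pipeline as above: sha256 over the DER-encoded public key
hash=$(openssl x509 -pubkey -in /tmp/demo.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "$hash"   # 64 lowercase hex characters
```

As a shortcut, if the token itself has expired, kubeadm token create --print-join-command regenerates a complete join command in one step.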

Configure Network Plugin

Refer to the Kubernetes documentation for network plugin options.

# Download Calico network plugin
curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml -O

# Apply the configuration
kubectl apply -f calico.yaml

Check Status

The previous step will download images, which may take some time.

# Check successful installation, default namespace is kube-system
kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bd579bf9c-9j8nx   1/1     Running   0          3m3s
kube-system   calico-node-7b2cv                          1/1     Running   0          3m3s
kube-system   coredns-64897985d-lrsg4                    1/1     Running   0          15m
kube-system   coredns-64897985d-qkjdz                    1/1     Running   0          15m
kube-system   etcd-master                                1/1     Running   0          15m
kube-system   kube-apiserver-master                      1/1     Running   0          15m
kube-system   kube-controller-manager-master             1/1     Running   0          15m
kube-system   kube-proxy-hgktb                           1/1     Running   0          15m
kube-system   kube-scheduler-master                      1/1     Running   0          15m

# Check node status
kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   26m   v1.23.0

Simple Application Test

Since we only created a single node (master only), and Kubernetes by default doesn't run pods on the master node, we need to configure it to allow pod execution on the master.

# For a single node setup, allow pods to run on the master node
# Kubernetes default policy is to run pods on worker nodes only, not on master nodes.
# For development or single-node cluster deployment, use:
kubectl taint nodes --all node-role.kubernetes.io/master-

# To revert master to master-only status:
#kubectl taint node master node-role.kubernetes.io/master="":NoSchedule
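You can confirm the taint was removed (the node name is assumed to be master, matching the kubectl get nodes output earlier):

```shell
# Before: node-role.kubernetes.io/master:NoSchedule
# After removal the field should read <none>
kubectl describe node master | grep -i taints
```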

Create Nginx Deployment

cat << EOF >nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.15.4
        ports:
        - containerPort: 80
EOF

Create Nginx Service

cat << EOF >nginx_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-web-service
  labels:
    app: webserver
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30949
  selector:
    app: webserver
EOF

Apply Configuration

Based on the created YAML files, create deployment and service. Since no namespace is specified, they'll be created in the default namespace. For other namespaces, add the -n parameter.

kubectl apply -f nginx_deployment.yaml
kubectl apply -f nginx_service.yaml
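Before checking the pods by hand, kubectl can wait for the rollout to finish and confirm the service found a backend:

```shell
# Blocks until the deployment's pods are available
kubectl rollout status deployment/nginx-app

# The service should list at least one pod IP under ENDPOINTS
kubectl get endpoints nginx-web-service
```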

Check Status

kubectl get svc -n default
kubectl get deployments
kubectl get pods

Example Results

[root@localhost ~]# kubectl get svc -n default
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        2d3h
nginx-web-service   NodePort    10.97.200.51   <none>        80:30949/TCP   101s
[root@localhost ~]# kubectl get deployments -n default
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app   1/1     1            1           2m11s
[root@localhost ~]# kubectl get pods -n default
NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-746ccc65d8-pwcqb   1/1     Running   0          2m13s

You can access the service using the nodePort (30949) from the nginx_service.yaml file:

[root@localhost ~]# curl localhost:30949
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check Pod Logs

First get the pod name, then view logs:

[root@localhost ~]# kubectl get pods -n default
NAME                                READY   STATUS    RESTARTS   AGE
nginx-app-746ccc65d8-pwcqb   1/1     Running   0          8m12s
[root@localhost ~]# kubectl -n default logs -f nginx-app-746ccc65d8-pwcqb
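When a pod misbehaves, describe and exec are the natural next steps after logs (pod name taken from the output above):

```shell
# Events at the bottom of describe usually explain scheduling
# or image-pull failures
kubectl -n default describe pod nginx-app-746ccc65d8-pwcqb

# Run a command inside the container
kubectl -n default exec nginx-app-746ccc65d8-pwcqb -- nginx -v
```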

Tags: Kubernetes
