Kubernetes Core Concepts and Administration
Understanding Kubernetes Architecture
Evolution of Application Deployment Models
Application deployment has evolved through three major phases:
- Traditional Deployment: Applications ran directly on physical machines in the early internet era
  - Pros: Simple; no additional technology required
  - Cons: No resource boundaries, poor resource allocation, programs can interfere with one another
- Virtualization Deployment: Multiple virtual machines on one physical machine
  - Pros: Isolated environments; a degree of security
  - Cons: Each VM carries a full guest OS, adding overhead and wasting resources
- Containerized Deployment: Containers share the host OS
  - Pros: Isolated filesystem, CPU, memory, and process space per container; portable across clouds and platforms
Containerization introduces management challenges like:
- Automatic replacement of failed containers
- Horizontal scaling during traffic spikes
These issues are addressed by container orchestration tools:
- Swarm: Docker's own tool
- Mesos: Apache resource management, requires Marathon
- Kubernetes: Google's open-source solution
Kubernetes Overview
Kubernetes is a container-based distributed architecture solution and an open-source descendant of Google's internal Borg system. A Kubernetes cluster is a group of servers that manages containerized applications automatically.
Key features:
- Self-healing: Failed containers are restarted or replaced automatically
- Scalability: Automatic container count adjustment
- Service Discovery: Automatic service location
- Load Balancing: Request distribution across containers
- Rollback: Quick version rollback capability
- Storage Orchestration: Automated storage volume creation
Core Components
A Kubernetes cluster consists of control plane and worker nodes.
Control Plane (Master):
- ApiServer: Single entry point for commands, authentication, authorization
- Scheduler: Resource scheduling to nodes
- ControllerManager: Maintains cluster state
- Etcd: Stores cluster resource information
Worker Nodes:
- Kubelet: Container lifecycle management via Docker
- KubeProxy: Internal service discovery and load balancing
- Docker: Node-level container operations
Core Concepts
- Master: Cluster control plane
- Node: Worker node running containers
- Pod: Smallest unit, one or more containers
- Controller: Manages pod lifecycle
- Service: Unified entry point for pods
- Label: Pod classification tags
- Namespace: Pod environment isolation
Cluster Setup
Prerequisites
Two main approaches:
- kubeadm: Official tool for quick cluster deployment
- Binary Package: Manual component installation
System Requirements
- CentOS 7.5+ x86_64
- 2GB RAM, 2 CPUs, 30GB disk
- Network connectivity between nodes
- Internet access for image pulling
- Disabled swap
Environment Preparation
Configure hostname resolution and time synchronization on every node, then disable the firewall, SELinux, and swap.
Install Docker and the Kubernetes components (this assumes the Docker CE and Kubernetes yum repositories are already configured):
# Install Docker
yum install -y docker-ce
systemctl start docker
systemctl enable docker
# Install Kubernetes components
yum install -y kubeadm kubelet kubectl
systemctl enable kubelet
Cluster Initialization
Initialize the master node:
kubeadm init \
--apiserver-advertise-address=192.168.90.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.17.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
Join worker nodes:
kubeadm join 192.168.90.100:6443 --token awk15p.t6bamck54w69u4s8 \
    --discovery-token-ca-cert-hash sha256:a94fa09562466d32d29523ab6cff122186f1127599fa4dcd5fa0152694f17117
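After a successful init, kubeadm prints instructions for pointing kubectl at the new cluster. A sketch of that step (paths are the kubeadm defaults):

```shell
# Make kubectl on the master talk to the new cluster.
mkdir -p "$HOME/.kube"
# admin.conf is generated by 'kubeadm init'; copy it only if it exists.
if [ -f /etc/kubernetes/admin.conf ]; then
  sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```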
Network Plugin Installation
Install Flannel network plugin:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resource Management
Resource Types
Kubernetes abstracts everything into resources. Key types include:
- Cluster Level: nodes, namespaces
- Pod Resources: pods, replication controllers, replica sets, deployments
- Service Discovery: services, ingress
- Storage: volumes, persistent volumes
- Configuration: config maps, secrets
YAML Configuration
YAML syntax basics:
# Object definition
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: web
spec:
  containers:
  - name: app
    image: nginx:latest
    ports:
    - containerPort: 80
Resource Operations
Three management approaches:
- Imperative commands: operate on live objects directly (kubectl run, create, delete)
- Imperative object configuration: commands plus files (kubectl create -f, kubectl delete -f)
- Declarative object configuration: apply configuration files (kubectl apply -f)
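The declarative approach keeps the desired state in a file and lets Kubernetes reconcile toward it. A minimal sketch (the filename is illustrative; the namespace name dev matches the examples below):

```yaml
# dev-ns.yaml -- declarative namespace definition
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Applied with kubectl apply -f dev-ns.yaml; re-running the same command is a no-op, which is the main advantage over an imperative create.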
Practical Implementation
Namespace Management
Create and manage namespaces:
# Create namespace
kubectl create namespace dev
# List namespaces
kubectl get namespaces
# Delete namespace
kubectl delete namespace dev
Pod Operations
Create pods with kubectl run (in current Kubernetes this creates a bare Pod; older releases created a Deployment):
# Deploy nginx pod
kubectl run nginx --image=nginx:latest --port=80 --namespace dev
# Check pod status
kubectl get pods -n dev
# Delete pod
kubectl delete pod nginx -n dev
Label Management
Use labels for resource categorization:
# Add label
kubectl label pod nginx version=1.0 -n dev
# Query by label
kubectl get pods -l version=1.0 -n dev
Deployment Controller
Manage pod replicas with deployments:
# Create deployment
kubectl create deployment nginx --image=nginx:latest --replicas=3 -n dev
# Scale deployment
kubectl scale deployment nginx --replicas=5 -n dev
# Update image
kubectl set image deployment nginx nginx=nginx:1.17.1 -n dev
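The same deployment can also be described declaratively. A sketch of the equivalent manifest (the app: nginx label is an assumption mirroring what kubectl create deployment generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

With this file, scaling becomes editing replicas and re-running kubectl apply -f, rather than issuing a separate scale command.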
Service Access
Expose pods externally:
# Create ClusterIP service
kubectl expose deployment nginx --name=nginx-service --type=ClusterIP --port=80 -n dev
# Create NodePort service
kubectl expose deployment nginx --name=nodeport-service --type=NodePort --port=80 -n dev
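A NodePort service can likewise be written as a manifest. A sketch (the app: nginx selector and nodePort 30080 are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: nginx        # must match the pod labels of the deployment
  ports:
  - port: 80          # cluster-internal port
    targetPort: 80    # container port
    nodePort: 30080   # external port; default allowed range is 30000-32767
```

The service is then reachable from outside the cluster at any node's IP on port 30080.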
Pod Internals
Pod Structure
Each pod contains:
- Application containers
- Pause container (root container)
Pod Configuration
Complete pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: "1"
        memory: "512Mi"
      requests:
        cpu: "100m"
        memory: "256Mi"
Lifecycle Management
Pod states: Pending, Running, Succeeded, Failed, Unknown
Init Containers
Run before main containers:
initContainers:
- name: wait-for-db
  image: busybox:1.30
  command: ['sh', '-c', 'until ping db.example.com -c 1; do echo waiting; sleep 2; done;']
Probes
Health checks:
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 10
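Alongside liveness probes, a readiness probe controls whether the pod receives Service traffic: a failing readiness probe removes the pod from endpoints without restarting it. A sketch using a TCP check (values are illustrative):

```yaml
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3   # marked NotReady after 3 consecutive failures
```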
Storage Solutions
Volume Types
- EmptyDir: Temporary directory
- HostPath: Host filesystem mount
- NFS: Network filesystem
- PersistentVolume: Long-term storage
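As an illustration of the simplest type, an emptyDir volume shared between two containers in one pod (names and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: writer
    image: busybox:1.30
    command: ['sh', '-c', 'while true; do date >> /logs/out.log; sleep 5; done']
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  - name: reader
    image: busybox:1.30
    command: ['sh', '-c', 'touch /logs/out.log; tail -f /logs/out.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  volumes:
  - name: shared-logs
    emptyDir: {}     # created when the pod starts, deleted when it is removed
```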
Persistent Volumes
PV/PVC pattern:
# PV definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.5.6
    path: /data/pv1
---
# PVC request
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
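A pod then consumes the claim by name, without knowing anything about the underlying NFS server. A sketch (pod and volume names are illustrative; claimName matches the PVC above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs   # binds to the claim, which binds to a matching PV
```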
Security and Access Control
Authentication Methods
- Basic Auth: Username/password via HTTP Basic authentication
- Token Auth: Bearer tokens presented in the HTTP Authorization header
- TLS Certificates: Mutual (client) certificate authentication
Authorization
RBAC (Role-Based Access Control):
# Role definition
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Role binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Admission Control
Plugins that intercept requests:
- NamespaceLifecycle: Prevents creation in non-existent namespaces
- ResourceQuota: Enforces resource limits
- LimitRanger: Applies resource constraints
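For example, the ResourceQuota plugin rejects requests that would exceed a quota object defined in the namespace. A sketch (the quota name and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"             # at most 10 pods in the namespace
    requests.cpu: "4"      # total CPU requests across all pods
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```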
Dashboard Interface
Installation
Deploy the Dashboard (the recommended manifest creates a ClusterIP Service by default; edit the Service to type: NodePort with nodePort: 30009 for external access):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
Create admin account:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
Access https://node-ip:30009 in a browser and log in with the token stored in the dashboard-admin service account's secret.