ConfigMap Management, Advanced Scheduling, Rollback, and Scaling of Pods in Kubernetes
ConfigMap is a Kubernetes resource object used to store application configuration data, decoupling configuration details from container images. This allows flexible management and updates to application configurations without modifying the image.
Key characteristics of ConfigMap:
- Configuration Separation: ConfigMap separates application configuration from images, enabling configuration changes without rebuilding the image.
- Flexible Data Types: Supports key-value pairs and entire configuration file contents.
- Data Injection: ConfigMap data can be injected into Pods via environment variables, command-line arguments, or volume mounts.
- Live Updates: When a ConfigMap is updated, if a Pod uses it as a volume, Kubernetes can automatically refresh the configuration inside the Pod (requires application support).
Creating a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-configuration
data:
  application.properties: |-
    server.port=8080
    logging.level=INFO
  db-settings.yaml: |-
    host: postgres-service
    port: 5432
This YAML defines a ConfigMap named app-configuration containing two configuration files: application.properties and db-settings.yaml.
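For completeness, here is one way to create this object from the command line; a minimal sketch assuming the manifest above is saved as app-configuration.yaml, or that the two configuration files exist locally (both filenames are illustrative):

# Create the ConfigMap from the manifest above (filename is an assumption)
kubectl apply -f app-configuration.yaml

# Alternatively, build the same ConfigMap directly from local files,
# assuming application.properties and db-settings.yaml exist in the
# current directory
kubectl create configmap app-configuration \
  --from-file=application.properties \
  --from-file=db-settings.yaml

# Inspect the result
kubectl describe configmap app-configuration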
Example Pod Using ConfigMap
apiVersion: v1
kind: Pod
metadata:
  name: application-pod
spec:
  containers:
  - name: app-server
    image: nginx:latest
    volumeMounts:
    - name: cfg-vol
      mountPath: /app/conf
    envFrom:
    - configMapRef:
        name: app-configuration
  volumes:
  - name: cfg-vol
    configMap:
      name: app-configuration
This Pod mounts the app-configuration ConfigMap to /app/conf within the container and also injects its data as environment variables.
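Both injection paths can be verified against the running Pod. A minimal check, assuming the ConfigMap and Pod above have been applied (note that envFrom only produces variables for keys that are valid environment variable names, so keys containing dots, such as application.properties, are skipped):

# List the files projected into the container by the volume mount
kubectl exec application-pod -- ls /app/conf

# Print one of the mounted configuration files
kubectl exec application-pod -- cat /app/conf/application.properties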
Limitations when using ConfigMap:
- The ConfigMap must be created before the Pod that references it.
- ConfigMaps are namespace-scoped; Pods can only reference ConfigMaps in the same namespace.
- Quota management for individual ConfigMaps is not implemented; a ResourceQuota can only limit the number of ConfigMaps in a namespace, not their size.
- The kubelet only supports ConfigMap usage for Pods managed by the API Server. Static Pods created on a node via --manifest-url or --config cannot reference ConfigMaps.
- When a ConfigMap is mounted as a volume, it can only be mounted as a directory inside the container, not as a single file. The mounted directory will contain all items from the ConfigMap. If the directory previously contained other files, they will be obscured. To preserve other files, mount the ConfigMap to a temporary directory and use a startup script to copy or link the configuration files into the actual application directory, as sketched below.
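The copy-at-startup workaround from the last limitation could look like the following; a minimal sketch in which the temporary mount path, target directory, and startup command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod-copy-config
spec:
  containers:
  - name: app-server
    image: nginx:latest
    # Copy the projected files into the real config directory, then start
    # the application (paths and command are assumptions for illustration)
    command: ["/bin/sh", "-c"]
    args:
    - mkdir -p /app/conf && cp /tmp/app-conf/* /app/conf/ && exec nginx -g 'daemon off;'
    volumeMounts:
    - name: cfg-vol
      mountPath: /tmp/app-conf   # temporary directory; avoids obscuring /app/conf
  volumes:
  - name: cfg-vol
    configMap:
      name: app-configuration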
Pod Scheduling Strategies
Kubernetes offers diverse scheduling strategies with the following primary features:
- Node Affinity/Anti-affinity: Schedule Pods onto nodes with specific labels, or steer them away from certain nodes, using nodeAffinity (anti-affinity is expressed with the NotIn and DoesNotExist operators).
- Topology Spread Constraints: Control Pod distribution across failure domains like zones for high availability (see the sketch after this list).
- Resource Constraints: Schedule based on node resources like CPU, memory, disk, and GPU using resources.requests and resources.limits (also shown in the sketch below).
- Taints and Tolerations: Allow Pods to tolerate node taints for precise scheduling control.
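Topology spread constraints and resource requests/limits have no dedicated example below, so here is a minimal combined sketch; the Pod name, labels, and resource figures are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: spread-web-pod
  labels:
    app: web
spec:
  # Spread Pods labeled app=web evenly across zones
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx:latest
    # The scheduler places the Pod only on nodes with at least the requested
    # resources free; limits cap usage at runtime
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"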
- NodeSelector: Schedule Pods onto nodes with specific labels.
Example: Schedule a Pod requiring a GPU onto nodes labeled accelerator=gpu.
apiVersion: v1
kind: Pod
metadata:
  name: tensorflow-pod
spec:
  nodeSelector:
    accelerator: "gpu"
  containers:
  - name: tf-container
    image: tensorflow/tensorflow:latest-gpu
- Pod Affinity & Anti-Affinity: Control co-location or separation of Pods.
Example: Prefer scheduling a cache Pod onto the same node as its associated web server Pod.
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web-server
          topologyKey: kubernetes.io/hostname
  containers:
  - name: redis
    image: redis:alpine
- Tolerations & Taints: Allow Pods to schedule onto tainted nodes.
Example: A monitoring Pod that can tolerate the node.kubernetes.io/unreachable taint.
apiVersion: v1
kind: Pod
metadata:
  name: monitor-pod
spec:
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
  containers:
  - name: monitoring-agent
    image: prom/node-exporter
- DaemonSet: Ensures a copy of a Pod runs on all (or selected) nodes.
Example: Deploy a log collection agent on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:latest
- Job: Runs a Pod to completion for a one-off task.
Example: Run a database migration job.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: migration-tool
        image: myorg/migrator:v1.0
      restartPolicy: Never
- CronJob: Runs Jobs on a time-based schedule.
Example: Generate a daily report at 2 AM.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report-generator
            image: report-gen:latest
            command: ["/bin/bash", "/scripts/generate.sh"]
            volumeMounts:
            - name: output
              mountPath: /output
          volumes:
          - name: output
            persistentVolumeClaim:
              claimName: report-pvc
          restartPolicy: OnFailure
Pod Rollback
For Pods managed by controllers like Deployment or StatefulSet:
- Revision History: Kubernetes maintains a history of updates.
- Seamless Rollback: Quickly revert Pods to a previous stable version.
- Management Visibility: Use kubectl rollout history to inspect update history.
Rolling Back a Deployment
kubectl rollout undo deployment/web-app
This command reverts the web-app deployment to its previous revision.
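To roll back to a specific revision rather than just the previous one, the history and undo subcommands can be combined; the revision number below is an illustrative assumption:

# Inspect the recorded revisions of the Deployment
kubectl rollout history deployment/web-app

# Revert to a specific revision (revision number is an assumption)
kubectl rollout undo deployment/web-app --to-revision=2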
Pod Scaling
Scaling capabilities allow manual or automatic adjustment of Pod replicas:
- Elastic Scaling: Automatically adjust replica count based on CPU/memory usage, custom metrics, or external metrics using the Horizontal Pod Autoscaler (HPA); see the sketch after this list.
- Manual Scaling: Quickly adjust replicas using kubectl scale.
- Rolling Update Strategy: During scaling, Kubernetes performs rolling updates to maintain service availability.
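As referenced in the list above, an HPA can drive elastic scaling. A minimal sketch of an autoscaling/v2 HorizontalPodAutoscaler targeting the web-app Deployment, where the name, replica bounds, and CPU target are illustrative and a metrics source such as metrics-server is assumed:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # Scale out when average CPU utilization across Pods exceeds 80%
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

A roughly equivalent one-liner is kubectl autoscale deployment/web-app --min=2 --max=10 --cpu-percent=80.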
Scaling a Deployment
Increase replicas to 5:
kubectl scale deployment/web-app --replicas=5
Decrease replicas to 2:
kubectl scale deployment/web-app --replicas=2