Deploying Common Middleware in Kubernetes Using Helm
This section covers deploying common development middleware using Helm. Many vendors provide Helm charts for their components, which significantly reduces deployment complexity.
MySQL Master-Slave Cluster Deployment
- Add the Bitnami chart repository: https://charts.bitnami.com/bitnami
- Search for MySQL and deploy with custom configuration
- Verify the deployment status
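Sketched as commands, the steps look like this (parameter names follow the current Bitnami mysql chart; check them against `helm show values bitnami/mysql` before deploying):

```shell
# Add the Bitnami repository and locate the chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/mysql

# Deploy one primary and two replicas (replication architecture)
helm install mysql bitnami/mysql -n middleware --create-namespace \
  --set architecture=replication \
  --set secondary.replicaCount=2 \
  --set auth.rootPassword=my-root-pass

# Verify the rollout
kubectl get pods -n middleware -l app.kubernetes.io/name=mysql
```

These commands assume a reachable cluster and the `middleware` namespace convention used later in this section.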
MySQL High-Availability Cluster
Dual-Master Architecture with Keepalived
MySQL dual-master replication allows two MySQL instances to act as masters for each other. If both masters accepted write operations simultaneously, data conflicts could occur, so Keepalived manages a virtual IP (VIP) that directs all write traffic to a single master at a time.
Dual-master replication (also known as Master-Master Replication) works as follows:
- Bidirectional Replication: Each master receives write operations, records them in binary logs, and sends these events to the other master for replication.
- Data Synchronization: Each master applies binary log events from the other master, maintaining synchronized datasets.
- Conflict Resolution: Since both masters can accept writes, conflicts may occur when both receive updates to the same row. Resolution strategies include timestamp-based prioritization or custom logic.
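A minimal my.cnf sketch for such a pair (illustrative values; the staggered auto_increment settings keep the two masters from generating colliding primary keys):

```ini
# Master A (Master B mirrors this with server-id = 2 and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log_slave_updates        = ON
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # unique starting point per master
```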
Advantages:
- Redundancy: Each master holds a complete dataset and can serve as a redundant backup for the other master
- Load Balancing: Read and write operations can be distributed across masters
- Failover: Write operations can be redirected if one master fails
Considerations:
- Conflict handling requires careful configuration
- Network latency affects replication performance
- Application writes must be directed to only one master to maintain consistency
Implementation in Kubernetes
- Create a Kubernetes cluster with sufficient nodes
- Deploy Keepalived instances using Deployment or StatefulSet
- Configure Keepalived with VIP and health check targets
- Deploy MySQL instances (one as primary, one as secondary)
- Configure MySQL replication between instances
- Test and monitor the setup
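The Keepalived side of the steps above might look like the following sketch (the interface name, VIP, and health-check command are assumptions to adapt to your environment):

```
vrrp_script chk_mysql {
    script   "/usr/bin/mysqladmin ping -h 127.0.0.1"   # assumed health check
    interval 2
    fall     3
}

vrrp_instance VI_MYSQL {
    state             MASTER          # BACKUP on the second node
    interface         eth0            # adjust to the actual interface
    virtual_router_id 51
    priority          100             # lower value on the second node
    track_script {
        chk_mysql
    }
    virtual_ipaddress {
        192.168.1.100/24              # the VIP applications connect to
    }
}
```

When the health check fails on the active node, Keepalived moves the VIP to the standby, redirecting writes without an application-side change.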
Common Issues in Dual-Master Setup
When both masters receive write operations on the same row:
- Data Conflicts: Different modifications on the same row cause inconsistency
- Data Loss: Without conflict detection, one master's writes may be overwritten
- Consistency Issues: Async replication delay causes data divergence
Solutions:
- Implement conflict detection and resolution strategies
- Partition data so specific rows are written to only one master
- Handle conflicts at the application layer using optimistic or pessimistic locking
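Application-layer optimistic locking, mentioned above, can be sketched with a version column (table and values are hypothetical):

```shell
# Optimistic locking: the UPDATE only succeeds if the row is unchanged since it was read
mysql -uroot -p appdb -e "
UPDATE accounts
   SET balance = balance - 10, version = version + 1
 WHERE id = 42 AND version = 7;"
# If a concurrent writer bumped the version first, 0 rows match: retry or report a conflict
```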
Transaction Loss Considerations
When failover occurs from master to slave, transaction loss may happen due to async replication:
- Uncommitted Transactions: Transactions not committed before failover are lost
- Replication Delay: Committed transactions may not reach the slave before failover
Mitigation:
- Use InnoDB storage engine for transaction support
- Configure semi-synchronous replication
- Monitor replication status regularly
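Semi-synchronous replication can be enabled at runtime through the plugins bundled with MySQL (5.7 plugin names shown; from 8.0.26 they are renamed to rpl_semi_sync_source_* / rpl_semi_sync_replica_*):

```shell
# On the primary: require at least one replica ACK before a commit returns
mysql -uroot -p -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
                    SET GLOBAL rpl_semi_sync_master_enabled = 1;
                    SET GLOBAL rpl_semi_sync_master_timeout = 1000;"  # ms, then fall back to async

# On each replica
mysql -uroot -p -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
                    SET GLOBAL rpl_semi_sync_slave_enabled = 1;"
```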
Deploying RadonDB MySQL HA Cluster
RadonDB MySQL is an open-source, cloud-native HA solution based on MySQL. Using the Raft protocol, it provides fast failover without transaction loss.
# Deploy using kubectl via the official project
git clone https://github.com/radondb/radondb-mysql-kubernetes.git
cd radondb-mysql-kubernetes
kubectl apply -f config/samples/mysql_v1alpha1_mysqlcluster.yaml
Nacos Cluster Deployment
Quick Start with Official Project
git clone https://github.com/nacos-group/nacos-k8s.git
cd nacos-k8s
./quick-startup.sh
Helm Deployment
Add chart repositories:
helm repo add nacos https://ygqygq2.github.io/charts
helm repo update
Configure values.yaml:
mysql:
  enabled: true
  architecture: replication
  auth:
    rootPassword: "nacos"
    database: "nacos"
    username: "nacos"
    password: "nacos"
    replicationUser: "replicator"
    replicationPassword: "replicator"
Deploy:
helm install nacos nacos/nacos -n nacos-system --create-namespace
Redis Cluster Deployment
Master-Slave with Sentinel
Add repository and search for Redis:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis
Key configuration parameters:
| Parameter | Description | Value |
|---|---|---|
| architecture | Redis architecture | replication |
| auth.enabled | Enable password authentication | true |
| auth.password | Redis password | your-password |
| master.count | Number of master instances | 1 |
| replica.replicaCount | Number of replicas | 3 |
| sentinel.enabled | Enable Sentinel | true |
Deploy:
helm install redis bitnami/redis -n middleware \
--set sentinel.enabled=true \
--set replica.replicaCount=3 \
--set auth.password=redis-pass
Redis Cluster Mode
helm install redis-cluster bitnami/redis-cluster -n middleware \
--set cluster.nodes=6 \
--set cluster.replicas=1 \
--set cluster.externalAccess.enabled=true
Retrieve the password:
kubectl get secret -n middleware redis-cluster -o jsonpath="{.data.redis-password}" | base64 -d
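Secret values are stored base64-encoded, which is why the command above pipes through base64 -d. A quick illustration of the round trip:

```shell
# Encode the way Kubernetes stores it, then decode the way the command above does
encoded=$(printf '%s' 'redis-pass' | base64)
echo "$encoded"                      # cmVkaXMtcGFzcw==
printf '%s' "$encoded" | base64 -d   # redis-pass
```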
RedisInsight Client
Deploy RedisInsight for visual management. RedisInsight is not published as a Bitnami chart, so a simple approach is to run the official container image directly:
# RedisInsight v2 listens on port 5540 (v1 used 8001)
kubectl create deployment redisinsight -n middleware --image=redis/redisinsight:latest
kubectl expose deployment redisinsight -n middleware --port=5540 --type=LoadBalancer
ZooKeeper Cluster
helm install zookeeper bitnami/zookeeper -n middleware \
--set replicaCount=3 \
--set auth.client.enabled=false
Kafka Cluster Deployment
helm install kafka bitnami/kafka -n kafka \
--create-namespace \
--set replicaCount=3 \
--set externalAccess.enabled=true \
--set externalAccess.service.type=LoadBalancer \
--set zookeeper.enabled=true \
--set persistence.enabled=true \
--set logPersistence.enabled=true
Verify Kafka Deployment
Create a test client:
kubectl run kafka-test -n kafka --image=bitnami/kafka:3.1.0 --restart=Never -- sleep infinity
Test producer and consumer:
# Internal access
kubectl exec -it kafka-test -n kafka -- kafka-console-producer.sh \
--broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test
kubectl exec -it kafka-test -n kafka -- kafka-console-consumer.sh \
--bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9092 \
--topic test --from-beginning
# External access (use the LoadBalancer IP)
kafka-console-producer.sh --broker-list <external-ip>:9094 --topic test
kafka-console-consumer.sh --bootstrap-server <external-ip>:9094 --topic test --from-beginning
Kafka Management Tools
| Tool | Description |
|---|---|
| Know Streaming | Enterprise Kafka management, monitoring, and multi-active disaster recovery |
| Kafka Manager | Yahoo's Kafka cluster management with Web UI for topic/partition management |
| Kafdrop | Lightweight Kafka visualization with real-time message monitoring |
| Kafka Map | Visual representation of message flow across partitions |
Elasticsearch Cluster
helm install elasticsearch bitnami/elasticsearch -n elastic \
--set clusterName=es-cluster \
--set master.replicaCount=3 \
--set data.replicaCount=2 \
--set coordinating.replicaCount=2
Node Roles
- Coordinating Node: Receives requests, routes to data nodes, aggregates results
- Data Node: Stores index data, handles CRUD operations, manages shards
- Ingest Node: Preprocesses and transforms data before indexing
- Master Node: Manages cluster state, metadata, node coordination
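Node roles can be inspected on a running cluster (the pod name and credentials are deployment-specific assumptions):

```shell
# node.role abbreviations: m=master, d=data, i=ingest; a node listing none is coordinating-only
kubectl exec -n elastic elasticsearch-master-0 -- \
  curl -s "http://localhost:9200/_cat/nodes?v&h=name,node.role,master"
```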
Installing IK Analyzer
# Access the container (plugins installed this way are lost on pod restart; bake them into a custom image for production)
kubectl exec -it elasticsearch-coord-0 -n elastic -- /bin/bash
# Install IK analyzer
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v8.12.0/elasticsearch-analysis-ik-8.12.0.zip
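After restarting the pod, the analyzer can be exercised through the index-independent _analyze API (replace the host placeholder with your service address):

```shell
# Tokenize Chinese text with the IK analyzer; compare the output against the default "standard" analyzer
curl -s -X POST "http://<es-host>:9200/_analyze" \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'
```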
RocketMQ Cluster
The parameters below follow a community RocketMQ Helm chart; add its repository under the alias rocketmq-repo before installing (Apache RocketMQ does not publish a chart in the Bitnami catalog).
helm install rocketmq rocketmq-repo/rocketmq -n rocketmq \
--create-namespace \
--set broker.size.master=3 \
--set broker.size.replica=1 \
--set nameserver.replicaCount=3 \
--set dashboard.enabled=true
MinIO Cluster
helm install minio bitnami/minio -n storage \
--set mode=distributed \
--set statefulset.replicaCount=4 \
--set persistence.enabled=true
SkyWalking Distributed Tracing
Using Released Versions
helm install skywalking oci://registry-1.docker.io/apache/skywalking-helm \
--version "4.3.0" \
-n skywalking \
--set oap.image.tag=9.2.0 \
--set oap.storageType=elasticsearch \
--set ui.image.tag=9.2.0
Using Development Version
git clone https://github.com/apache/skywalking-kubernetes
cd skywalking-kubernetes/chart
helm repo add elastic https://helm.elastic.co
helm dep up skywalking
Configure external Elasticsearch:
elasticsearch:
  enabled: false
  config:
    host: your.elasticsearch.host
    port:
      http: 9200
    user: "elastic"
    password: "your-password"
Deploy:
helm install skywalking ./skywalking -n skywalking -f ./skywalking/values-my-es.yaml
MongoDB Deployment
helm install mongodb bitnami/mongodb -n mongodb \
--set architecture=replicaset \
--set replicaCount=3 \
--set auth.rootPassword=mongopass
Retrieve the root password:
kubectl get secret -n mongodb mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d
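The password can then be composed into a connection string (the service name, namespace, and default replica set name rs0 assume the install command above):

```shell
# Build a replica-set connection string from the stored root password
MONGO_PASS=$(kubectl get secret -n mongodb mongodb \
  -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
echo "mongodb://root:${MONGO_PASS}@mongodb-headless.mongodb.svc.cluster.local:27017/admin?replicaSet=rs0"
```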