Fading Coder

One Final Commit for the Last Sprint

Deploying GlusterFS Distributed Storage in Kubernetes Clusters

This guide outlines the process of setting up GlusterFS as a distributed storage solution within a Kubernetes environment. The implementation involves configuring a three-node GlusterFS cluster and integrating it with Kubernetes through Persistent Volumes.

Prerequisites and Initial Setup

Time Synchronization (All Nodes)

Ensure all cluster nodes maintain consistent time synchronization:

# Install NTP client
sudo yum install -y ntpdate

# Synchronize with time server
sudo /usr/sbin/ntpdate ntp6.aliyun.com

# Configure automatic synchronization every 3 minutes (preserving any existing crontab entries)
(crontab -l 2>/dev/null; echo "*/3 * * * * /usr/sbin/ntpdate ntp6.aliyun.com &> /dev/null") | crontab -
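As a quick sanity check, the clock offsets between nodes can be compared in one pass; a minimal sketch, assuming passwordless SSH to the three node IPs used later in this guide:

```shell
# Print the current time on each node; offsets of more than a second
# or two can cause gluster peering and self-heal problems later.
for node in 192.168.3.223 192.168.3.224 192.168.3.225; do
    echo -n "$node: "
    ssh "$node" date '+%F %T'
done
```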

GlusterFS Installation and Configuration

1. Repository Setup (All Nodes)

# Add GlusterFS repository
sudo yum install -y centos-release-gluster

# View available GlusterFS versions
sudo yum list --showduplicates glusterfs-server

2. GlusterFS Installation (All Nodes)

# Install GlusterFS server components
sudo yum install -y glusterfs-server-6.5

# Create storage directory
sudo mkdir -p /gfs1

# Start and enable GlusterFS service
sudo systemctl restart glusterd.service
sudo systemctl enable glusterd.service

3. Cluster Configuration (Primary Node Only)

# Add peer nodes to the cluster
sudo gluster peer probe 192.168.3.223
sudo gluster peer probe 192.168.3.224
sudo gluster peer probe 192.168.3.225

# Verify peer status
sudo gluster peer status
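Before creating volumes, it can help to verify programmatically that every node joined the trusted storage pool; a minimal sketch using `gluster pool list`, which includes the local node, so three entries are expected here:

```shell
# Count connected members of the trusted storage pool; expect 3.
connected=$(sudo gluster pool list | grep -c 'Connected')
if [ "$connected" -ne 3 ]; then
    echo "only $connected of 3 nodes connected; fix peering first" >&2
fi
```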

4. Volume Creation (Primary Node Only)

# Create a replicated volume across three nodes
sudo gluster volume create storage-volume replica 3 transport tcp \
    192.168.3.223:/gfs1 \
    192.168.3.224:/gfs1 \
    192.168.3.225:/gfs1 force

# Start the volume
sudo gluster volume start storage-volume

# Verify volume information
sudo gluster volume info storage-volume
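Beyond `volume info`, two further checks are worth running before putting data on the volume (both are standard gluster CLI subcommands):

```shell
# Confirm all three bricks (and their self-heal daemons) are online.
sudo gluster volume status storage-volume

# List any files with pending replication heals; ideally empty on a new volume.
sudo gluster volume heal storage-volume info
```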

Client-Side Mount Configuration

1. Client Installation (All Nodes)

# Install GlusterFS client components
sudo yum install -y glusterfs-6.5 glusterfs-fuse-6.5

# Create mount directory
sudo mkdir -p /mnt/gluster

# Mount the GlusterFS volume
sudo mount -t glusterfs localhost:/storage-volume /mnt/gluster

# Configure persistent mount
echo 'localhost:/storage-volume /mnt/gluster glusterfs _netdev,rw,acl 0 0' | sudo tee -a /etc/fstab
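To confirm replication is actually working, a file written through one node's mount should appear on the others; a minimal check (run the first command on one node, the second on a different one):

```shell
# On node A: write a marker file through the FUSE mount.
echo "replication test from $(hostname)" | sudo tee /mnt/gluster/repl-check.txt

# On node B: the same file should be readable through its own mount.
cat /mnt/gluster/repl-check.txt
```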

Kubernetes Integration

1. GlusterFS Service and Endpoints Configuration (Control Plane Node)

Create gluster-storage.yaml:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-endpoints
  namespace: default
subsets:
- addresses:
  - ip: 192.168.3.223
  - ip: 192.168.3.224
  - ip: 192.168.3.225
  ports:
  - port: 49152
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-endpoints
  namespace: default
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-python-app-pv
  labels:
    storage-type: glusterfs
spec:
  storageClassName: glusterfs-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-endpoints"
    path: "storage-volume/app-data"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-python-app-pvc
  namespace: default
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Apply the configuration:

kubectl apply -f gluster-storage.yaml
kubectl get pv,pvc
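Rather than polling `kubectl get pvc` by eye, the binding can be waited on directly (assuming kubectl v1.23+ for the `--for=jsonpath` form of `kubectl wait`):

```shell
# Block until the claim binds to the matching PV, or fail after 60s.
kubectl wait --for=jsonpath='{.status.phase}'=Bound \
    pvc/glusterfs-python-app-pvc --timeout=60s
```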

2. Application Deployment with GlusterFS Storage

Create python-application.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-web-app
  template:
    metadata:
      labels:
        app: python-web-app
    spec:
      containers:
      - name: python-web-app
        image: registry.example.com:30050/python-web:1.0
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: app-storage
          mountPath: /data/app
      volumes:
      - name: app-storage
        persistentVolumeClaim:
          claimName: glusterfs-python-app-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: python-web-service
  namespace: default
spec:
  selector:
    app: python-web-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: python-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: python-web-service
            port:
              number: 80

Deploy the application:

kubectl apply -f python-application.yaml

# Verify deployment
kubectl get pod,svc,ingress -o wide | grep python

# Check pod details
kubectl describe pod $(kubectl get pod | grep python-web-app | awk '{print $1}')
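Once the pods are running, the path through the Ingress can be exercised end to end; a sketch assuming an nginx ingress controller that publishes a load-balancer IP (on bare-metal clusters you may need the controller's node IP and node port instead):

```shell
# Look up the address the ingress controller exposed for this Ingress.
INGRESS_IP=$(kubectl get ingress python-app-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send a request with the Host header the Ingress rule matches on.
curl -H "Host: app.example.com" "http://${INGRESS_IP}/"
```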

Key Configuration Notes

  1. Network Configuration: Ensure all nodes (192.168.3.223-225) have proper network connectivity and that firewall rules allow GlusterFS traffic (management port 24007 and brick ports starting at 49152).

  2. Storage Path: The GlusterFS volume path storage-volume/app-data must exist or be created on the GlusterFS nodes.

  3. Replication: The configuration uses 3-way replication for high availability. Adjust based on your redundancy requirements.

  4. Storage Class: The glusterfs-storage class name is used here only to bind the statically created PersistentVolume to its claim; true dynamic provisioning of GlusterFS volumes would require an external provisioner such as Heketi.

  5. Access Modes: ReadWriteMany allows multiple pods to simultaneously read from and write to the same volume.
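For the firewall rules mentioned in note 1, the required ports can be opened on each GlusterFS node; a sketch for firewalld-based systems (the brick port range grows by one port per brick hosted on the node):

```shell
sudo firewall-cmd --permanent --add-port=24007-24008/tcp   # gluster management
sudo firewall-cmd --permanent --add-port=49152-49156/tcp   # brick ports
sudo firewall-cmd --reload
```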

This setup provides a robust, distributed storage solution for Kubernetes applications, offering high availability and shared storage capabilities across multiple pods and nodes.
