k8s uses GlusterFS to implement dynamic persistent storage

brief introduction

This article describes how to use GlusterFS to provide dynamic PV provisioning for k8s. GlusterFS provides the underlying storage, and Heketi exposes a RESTful API on top of GlusterFS to make it easier to manage. The setup supports the three k8s PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

The access mode is only a capability description, not an enforced restriction; if a PV is used in a way that does not match the PVC declaration, the storage provider is responsible for any runtime errors during access. For example, if you set the PVC access mode to ReadOnlyMany, a pod can still write to the volume after mounting it. If you need the volume to be truly read-only, you must specify the readOnly: true parameter when consuming the PVC, as shown in the fragment below.
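
To make a claim truly read-only, set readOnly: true where the pod consumes the PVC. A minimal, illustrative fragment of a pod spec (it reuses the gluster1 claim and gluster-vol1 volume names that appear later in this article):

volumes:
- name: gluster-vol1
  persistentVolumeClaim:
    claimName: gluster1
    readOnly: true    # enforced read-only, regardless of the PVC access mode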

install

Experimental Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
    (1..3).each do |i|
      config.vm.define "lab#{i}" do |node|
        node.vm.box = "centos-7.4-docker-17"
        node.ssh.insert_key = false
        node.vm.hostname = "lab#{i}"
        node.vm.network "private_network", ip: "11.11.11.11#{i}"
        node.vm.provision "shell",
          inline: "echo hello from node #{i}"
        node.vm.provider "virtualbox" do |v|
          v.cpus = 2
          v.customize ["modifyvm", :id, "--name", "lab#{i}", "--memory", "3096"]
          file_to_disk = "lab#{i}_vdb.vdi"
          unless File.exist?(file_to_disk)
            # 50GB
            v.customize ['createhd', '--filename', file_to_disk, '--size', 50 * 1024]
          end
          v.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
        end
      end
    end
end
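
With this Vagrantfile, bringing up the three lab nodes is the standard Vagrant workflow; a brief sketch, assuming VirtualBox and the centos-7.4-docker-17 box are available locally:

# Create and boot lab1, lab2 and lab3, each with an extra 50GB disk
vagrant up

# Log in to a node to run the configuration steps below
vagrant ssh lab1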

Environment configuration description

# To install glusterfs, each node needs to load the dm_thin_pool kernel module in advance
modprobe dm_thin_pool

# Configure the module to load automatically at boot
cat >/etc/modules-load.d/glusterfs.conf<<EOF
dm_thin_pool
EOF

# Install glusterfs fuse
yum install -y glusterfs-fuse
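
A quick sanity check on each node (optional, not part of the original steps) to confirm the module is loaded and the client is installed:

# Confirm the dm_thin_pool module is loaded
lsmod | grep dm_thin_pool

# Confirm the glusterfs fuse client is installed
glusterfs --version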

Installing glusterfs and heketi

# Install the heketi client
# Download the appropriate release from GitHub:
# https://github.com/heketi/heketi/releases
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
cp heketi-client/bin/heketi-cli /usr/local/bin

# View version
heketi-cli -v

# The following deployment steps are executed from this directory
cd heketi-client/share/heketi/kubernetes

# Deploy glusterfs in k8s
kubectl create -f glusterfs-daemonset.json

# View the cluster nodes
kubectl get nodes

# Label the nodes that provide storage
kubectl label node lab1 lab2 lab3 storagenode=glusterfs
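
# (Optional check) the stock glusterfs-daemonset.json schedules its pods with a
# nodeSelector on this label, so all three nodes should be listed here
kubectl get nodes -l storagenode=glusterfs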

# View glusterfs status
kubectl get pods -o wide

# Deploy heketi server 
# Configure permissions of heketi server
kubectl create -f heketi-service-account.json
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

# Create config secret
kubectl create secret generic heketi-config-secret --from-file=./heketi.json

# Initialize deployment
kubectl create -f heketi-bootstrap.json

# View heketi bootstrap status
kubectl get pods -o wide
kubectl get svc

# Configure port forwarding heketi server
HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
kubectl port-forward $HEKETI_BOOTSTRAP_POD 58080:8080

# Test access
# Another terminal
curl http://localhost:58080/hello
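# The endpoint should return a greeting such as "Hello from Heketi"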

# Configure the glusterfs topology
# The hostnames/manage field must match the node names shown by kubectl get nodes
# hostnames/storage specifies the storage network IP; this experiment uses the same IPs as the k8s cluster
cat >topology.json<<EOF
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab1"
              ],
              "storage": [
                "11.11.11.111"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab2"
              ],
              "storage": [
                "11.11.11.112"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab3"
              ],
              "storage": [
                "11.11.11.113"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
EOF
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli topology load --json=topology.json
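
# (Optional check) verify that the nodes and devices were registered as expected
heketi-cli topology info
heketi-cli node list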

# Use heketi to create a volume for storing the heketi database
heketi-cli setup-openshift-heketi-storage
kubectl create -f heketi-storage.json

# View the state
# Wait until all jobs show a Completed status
# before performing the following steps
kubectl get pods
kubectl get job

# Delete related resources generated during deployment
kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

# Deploy heketi server
kubectl create -f heketi-deployment.json

# View heketi server status
kubectl get pods -o wide
kubectl get svc

# View heketi status information
# Configure port forwarding for the heketi server
HEKETI_POD=$(kubectl get pods | grep heketi | awk '{print $1}')
kubectl port-forward $HEKETI_POD 58080:8080
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli cluster list
heketi-cli volume list
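
As an optional check that is not part of the original walkthrough, you can confirm that the three GlusterFS pods have formed a trusted storage pool by running gluster peer status inside any one of them:

GLUSTERFS_POD=$(kubectl get pod | grep glusterfs | head -1 | awk '{print $1}')
kubectl exec -ti $GLUSTERFS_POD -- gluster peer status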

test

# Create the StorageClass
# Because authentication is not enabled,
# restuser and restuserkey can be set to arbitrary values
HEKETI_SERVER=$(kubectl get svc | grep heketi | head -1 | awk '{print $3}')
echo $HEKETI_SERVER
cat >storageclass-glusterfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://$HEKETI_SERVER:8080"
  restauthenabled: "false"
  restuser: "will"
  restuserkey: "will"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
EOF
kubectl create -f storageclass-glusterfs.yaml

# View
kubectl get sc
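
# (Optional, not in the original steps) mark gluster-heketi as the default
# StorageClass so that PVCs without an explicit class use it
kubectl patch storageclass gluster-heketi -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'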

# Create a test PVC
cat >gluster-pvc-test.yaml<<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: gluster1
 annotations:
   volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
 accessModes:
  - ReadWriteOnce
 resources:
   requests:
     storage: 5Gi
EOF
kubectl apply -f gluster-pvc-test.yaml
 
# View
kubectl get pvc
kubectl get pv
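
# The PVC should reach the Bound state; if it stays Pending, inspect the
# provisioning events with:
kubectl describe pvc gluster1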
 
# Create an nginx pod to test the mount
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
EOF
kubectl apply -f nginx-pod.yaml
 
# View
kubectl get pods -o wide
 
# Write test content into the mounted volume
kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo Hello World from GlusterFS!!! > /usr/share/nginx/html/index.html'
 
# Access test
POD_IP=$(kubectl get pods -o wide | grep nginx-pod1 | awk '{print $(NF-1)}')
curl http://$POD_IP
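# Expected output: Hello World from GlusterFS!!!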
 
# View the file content on the underlying brick from inside a glusterfs pod
GLUSTERFS_POD=$(kubectl get pod | grep glusterfs | head -1 | awk '{print $1}')
kubectl exec -ti $GLUSTERFS_POD -- /bin/sh
mount | grep heketi
cat /var/lib/heketi/mounts/vg_56033aa8a9131e84faa61a6f4774d8c3/brick_1ac5f3a0730457cf3fcec6d881e132a2/brick/index.html
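# (the vg_*/brick_* path above is specific to this deployment and will differ in yours)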
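
Optionally, the test resources can be removed afterwards; a sketch using the manifests created above. With the default Delete reclaim policy, removing the PVC also removes the dynamically provisioned GlusterFS volume:

kubectl delete -f nginx-pod.yaml
kubectl delete -f gluster-pvc-test.yaml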

This article is republished from Juejin: k8s uses GlusterFS to implement dynamic persistent storage
