PV, PVC and SC resources of the container orchestration system K8s

In an earlier post we talked about attaching storage volumes to a Pod in k8s; see https://www.cnblogs.com/qiuhom-1874/p/14180752.html. Today let's talk about persistent storage volumes.

With a plain volume, the user has to pass the right parameters to the right type of storage interface by hand in order to map external storage into a volume object on k8s, so that the pod can mount the volume and the containers in the pod can use it. The prerequisite is that the user understands the storage system in question, its interface type and the relevant parameters, which makes using storage volumes on k8s a bit complicated. To simplify this, k8s introduces pv and pvc resources to hide the underlying storage interface: when users consume storage volumes they no longer care about the storage system's own interface, and no matter what kind of storage sits at the bottom, they only ever face a pvc.
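To make the difference concrete, here is a minimal sketch (not from the original post; the NFS address, path and claim name reuse the values that appear later in this article) contrasting a pod that mounts an NFS export directly with one that goes through a pvc:

# Direct mount: storage-specific knowledge leaks into the pod spec
apiVersion: v1
kind: Pod
metadata:
  name: demo-direct
spec:
  containers:
  - name: app
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    nfs:                       # the pod author must know the NFS details
      server: 192.168.0.99
      path: /data/v1
---
# Via pvc: the pod only names a claim; the pv and storage details stay with the admin
apiVersion: v1
kind: Pod
metadata:
  name: demo-pvc
spec:
  containers:
  - name: app
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs-pv-v1   # the only thing the user needs to know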

Relationship between PV, PVC, the K8s cluster and pods

Tip: when creating a pod, a user who needs a storage volume only has to care about a PVC object in the pod's namespace; the corresponding pv is defined by the cluster administrator, and the back-end storage itself is managed by a dedicated storage administrator. pv is a standard resource on k8s; its full name is PersistentVolume, i.e. persistent storage volume. Its main job is to map a logical unit of the back-end storage into a pv resource on k8s. pv is a cluster-level resource, so any namespace can be associated with a pv; the process of associating a pv is called binding, and the binding of a namespace to a pv is expressed with a PVC resource. PVC is short for PersistentVolumeClaim, i.e. a claim for a persistent storage volume; creating a PVC in a namespace binds that namespace to a pv in the cluster. Once a pv is bound, it changes from the Available state to the Bound state and can no longer be used by other namespaces; only a pv in the Available state can be bound. In short, the relationship between PVC and pv is one-to-one: a pv can only be claimed by one PVC. Whether several pods in the same namespace can use the same PVC at the same time depends on whether the pv allows multiple readers and writers, and that in turn depends on the back-end storage system. Different storage systems support different access modes; there are three: single-node read-write (ReadWriteOnce, RWO), multi-node read-write (ReadWriteMany, RWX) and multi-node read-only (ReadOnlyMany, ROX).
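If you want to look these fields up from the cluster itself, kubectl explain prints the built-in API documentation for them; a quick sketch (output omitted):

kubectl explain persistentvolume.spec.accessModes
kubectl explain persistentvolumeclaim.spec.accessModes
kubectl get pv,pvc -A        # pv is cluster-scoped; pvc is listed per namespace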

Example: pv resource creation

[root@master01 ~]# cat pv-v1-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v1
  labels:
    storsystem: nfs-v1
    rel: stable
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v1
    server: 192.168.0.99
[root@master01 ~]#

Tip: PV is a standard k8s resource; its group/version is v1 and its kind is PersistentVolume. The spec.capacity.storage field describes the storage capacity of the PV. volumeMode describes the kind of volume interface the storage system provides; in general there are two: a filesystem interface and a block-device interface. accessModes describes the access modes of the PV. The persistentVolumeReclaimPolicy field describes the reclaim policy of the volume, and there are three: Delete means that after the pvc is deleted the corresponding PV is deleted as well; Recycle (now deprecated) means that after the pvc is deleted the data on the PV is scrubbed so the PV can be claimed again; Retain means that after the pvc is deleted the PV is kept as-is, i.e. the PV object is still there and so is its data. The mountOptions field specifies mount options. The nfs block means the back-end storage is NFS; different storage types take different parameters, and for NFS we only need the NFS server address and the shared path. The manifest above maps the /data/v1 directory on the NFS server to a PV named nfs-pv-v1 on k8s. Note that the back-end storage must be prepared in advance before creating the PV.
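As a side note (not part of the original walkthrough), the reclaim policy of an existing pv can be changed in place with kubectl patch; a sketch using the pv from this post, assuming your volume plugin actually supports the new policy (the in-tree nfs plugin cannot delete data automatically, so Retain is the realistic choice here):

kubectl patch pv nfs-pv-v1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
kubectl get pv nfs-pv-v1     # the RECLAIM POLICY column should now show Delete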

Apply the manifest

[root@master01 ~]# kubectl apply -f pv-v1-demo.yaml
persistentvolume/nfs-pv-v1 created
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Available                                   4s
[root@master01 ~]# kubectl describe pv nfs-pv-v1
Name:            nfs-pv-v1
Labels:          rel=stable
                 storsystem=nfs-v1
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO,ROX,RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/v1
    ReadOnly:  false
Events:        <none>
[root@master01 ~]#

Tip: from the details of the pv you can see that its current status is Available, its back-end storage is nfs at 192.168.0.99, and the logical unit backing this pv is /data/v1;

Example: creating pvc

[root@master01 ~]# cat pvc-v1-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-pv-v1
  namespace: default
  labels:
    storsystem: nfs-v1
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      storsystem: nfs-v1
      rel: stable
[root@master01 ~]#

Note: pvc is also a standard resource on k8s; its group/version is v1 and its kind is PersistentVolumeClaim. The spec.accessModes field specifies the access modes of the pvc; they must be contained in the pv's accessModes, i.e. the pvc's access modes must be a subset of (equal to or narrower than) the pv's. resources describes the storage space constraints of the pvc: requests is the minimum capacity it needs, and limits is the maximum capacity it may use. The selector defines a label selector used to filter pvs that carry the matching labels; if no selector is defined, the pvc is matched against all Available pvs by capacity and access mode, and the best-fitting pv is bound.
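For reference, if the selector is left out, matching is done purely on capacity and access modes; a minimal pvc sketch of that form (the name pvc-any-nfs is made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-any-nfs          # hypothetical name, for illustration only
  namespace: default
spec:
  accessModes:
    - ReadWriteMany          # must be a subset of the target pv's access modes
  resources:
    requests:
      storage: 500Mi         # the controller picks a suitable Available pv of at least this size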

Apply the manifest

[root@master01 ~]# kubectl apply -f pvc-v1-demo.yaml
persistentvolumeclaim/pvc-nfs-pv-v1 created
[root@master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Bound    nfs-pv-v1   1Gi        RWO,ROX,RWX                   8s
[root@master01 ~]# kubectl describe pvc pvc-nfs-pv-v1
Name:          pvc-nfs-pv-v1
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        nfs-pv-v1
Labels:        storsystem=nfs-v1
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO,ROX,RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Bound    default/pvc-nfs-pv-v1                           19m
[root@master01 ~]#

Tip: the capacity shown for the pvc is the capacity of the pv it is bound to (1Gi), not the 500Mi it requested; you can also see that once the pv is bound by the pvc, its status changes from Available to Bound;

Example: create a pod that references the pvc and mounts it in the pod's container

[root@master01 ~]# cat redis-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: redis-data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: pvc-nfs-pv-v1
[root@master01 ~]#

Tip: to reference a pvc in a pod, define a volume of type persistentVolumeClaim and give the name of the pvc in claimName; the container then mounts that volume like any other;

Apply the manifest

[root@master01 ~]# kubectl apply -f redis-demo.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS              RESTARTS   AGE
redis-demo   0/1     ContainerCreating   0          7s
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          27s
[root@master01 ~]# kubectl describe pod redis-demo
Name:         redis-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Fri, 25 Dec 2020 21:55:41 +0800
Labels:       app=redis
Annotations:  <none>
Status:       Running
IP:           10.244.3.105
IPs:
  IP:  10.244.3.105
Containers:
  redis:
    Container ID:   docker://8e8965f52fd0144f8d6ce68185209114163a42f8437d7d845d431614f3d6dd05
    Image:          redis:alpine
    Image ID:       docker-pullable://redis@sha256:68d4030e07912c418332ba6fdab4ac69f0293d9b1daaed4f1f77bdeb0a5eb048
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 25 Dec 2020 21:55:48 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from redis-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  redis-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs-pv-v1
    ReadOnly:   false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned default/redis-demo to node03.k8s.org
  Normal  Pulling    36s   kubelet            Pulling image "redis:alpine"
  Normal  Pulled     30s   kubelet            Successfully pulled image "redis:alpine" in 5.284107704s
  Normal  Created    30s   kubelet            Created container redis
  Normal  Started    30s   kubelet            Started container redis
[root@master01 ~]#

Tip: you can see that the pod is running normally; the details show that the volume used by the pod is of type PersistentVolumeClaim with the name pvc-nfs-pv-v1, and that the container mounts the storage volume read-write;

Test: generate data in redis-demo and see whether it is saved to the nfs server normally

[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          5m28s
[root@master01 ~]# kubectl exec -it redis-demo -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set mykey "this is test key "
OK
127.0.0.1:6379> get mykey
"this is test key "
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb
/data #

Check whether the dump.rdb file is generated in the corresponding directory on the nfs server?

[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:02 dump.rdb
[root@docker_registry ~]#

Tip: you can see that the snapshot file generated by redis has a corresponding file on the nfs server;

Test: delete the pod to see if the corresponding file is still there?

[root@master01 ~]# kubectl delete -f redis-demo.yaml
pod "redis-demo" deleted
[root@master01 ~]# kubectl get pods
No resources found in default namespace.
[root@master01 ~]# ssh 192.168.0.99
The authenticity of host '192.168.0.99 (192.168.0.99)' can't be established.
ECDSA key fingerprint is SHA256:hQoossQnTJMXB0+DxJdTt6DMHuPFLDd5084tHyJ7920.
ECDSA key fingerprint is MD5:ef:61:b6:ee:76:46:9d:0e:38:b6:b5:dd:11:66:23:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.99' (ECDSA) to the list of known hosts.
root@192.168.0.99's password:
Last login: Fri Dec 25 20:13:05 2020 from 192.168.0.232
[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:05 dump.rdb
[root@docker_registry ~]# exit
logout
Connection to 192.168.0.99 closed.
[root@master01 ~]#

Tip: you can see that even after the pod is deleted, its snapshot file still exists on the nfs server;

Test: pin the pod to a different node with nodeName and re-create it, to see whether the new pod automatically loads the data from the snapshot

[root@master01 ~]# cat redis-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: redis
spec:
  nodeName: node01.k8s.org
  containers:
  - name: redis
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: redis-data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: pvc-nfs-pv-v1
[root@master01 ~]# kubectl apply -f redis-demo.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME         READY   STATUS              RESTARTS   AGE   IP       NODE             NOMINATED NODE   READINESS GATES
redis-demo   0/1     ContainerCreating   0          8s    <none>   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          21s   10.244.1.88   node01.k8s.org   <none>           <none>
[root@master01 ~]#

Tip: you can see that the newly created pod is scheduled to node01;

Exec into the new pod to see whether the data in the snapshot file has been loaded, i.e. whether the key is back in memory

[root@master01 ~]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          2m39s
[root@master01 ~]# kubectl exec -it redis-demo -- /bin/sh
/data # redis-cli
127.0.0.1:6379> get mykey
"this is test key "
127.0.0.1:6379> exit
/data # ls
dump.rdb
/data # exit
[root@master01 ~]#

Tip: you can see that the newly created pod reads the snapshot file on nfs normally and loads its data into memory;

Delete pvc to see if the corresponding pv is deleted?

Tip: you can see that the delete operation blocks as long as the pod still uses the pvc (the kubernetes.io/pvc-protection finalizer shown earlier keeps the pvc from being removed while it is in use);

View pvc status

[root@master01 ~]# kubectl delete pvc pvc-nfs-pv-v1
persistentvolumeclaim "pvc-nfs-pv-v1" deleted
^C
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Bound    default/pvc-nfs-pv-v1                           52m
[root@master01 ~]#

Tip: you can see that the pvc's status has changed to Terminating, but the pvc itself has not actually been removed, and the pv is still Bound;

Delete the pod to see if the corresponding pvc will be deleted?

[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          14m
[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# kubectl get pvc
No resources found in default namespace.
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Released   default/pvc-nfs-pv-v1                           54m
[root@master01 ~]#

Tip: you can see that as soon as the pod is deleted, the pending pvc deletion completes immediately; after the pvc is gone, the pv's status changes from Bound to Released, meaning it is waiting to be reclaimed. Since the manifest uses the Retain reclaim policy, the pv and its data have to be reclaimed manually;
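One common way to reclaim a Released pv by hand, after checking or cleaning the data on the storage side yourself, is to strip its claimRef so it returns to the Available state; a sketch (not shown in the original post):

kubectl patch pv nfs-pv-v1 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
kubectl get pv nfs-pv-v1     # the status should change from Released back to Available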

Delete pv to see if the corresponding data will be deleted?

[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Released   default/pvc-nfs-pv-v1                           57m
[root@master01 ~]# kubectl delete pv nfs-pv-v1
persistentvolume "nfs-pv-v1" deleted
[root@master01 ~]# kubectl get pv
No resources found
[root@master01 ~]# ssh 192.168.0.99
root@192.168.0.99's password:
Last login: Fri Dec 25 22:05:53 2020 from 192.168.0.41
[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:24 dump.rdb
[root@docker_registry ~]# exit
logout
Connection to 192.168.0.99 closed.
[root@master01 ~]#

Tip: you can see that the pv is deleted but the snapshot file on the nfs server is not cleared;

That covers the use of pv and pvc resources; now let's talk about SC resources.

SC is short for StorageClass, i.e. a storage class; this resource mainly provides an interface for the automatic provisioning of pv resources. Automatic (dynamic) provisioning means that the user does not create pvs by hand: when a pvc is created, the persistent-volume controller automatically creates a matching pv and binds it. The prerequisites for using SC resources are that the back-end storage exposes a RESTful management interface the provisioner can call, and that the pvc references the SC by its storage class name. In short, an SC resource is the interface through which pvs are automatically created on the back-end storage and associated with the corresponding pvcs, as shown below.

Tip: when using an sc to dynamically create a pv, the pvc must reference that sc; the figure above shows that when a user creates a pvc referencing an sc, the sc calls the management interface of the underlying storage system to create the corresponding pv and associates it with the pvc;

Example: creating sc resources

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"

Tip: the above is an example from the official documentation. When creating an sc resource, the group/version is storage.k8s.io/v1 and the kind is StorageClass; the provisioner field names the volume plugin (provisioner) to use, and parameters defines the parameters passed to the corresponding storage management interface;

Referencing an SC resource object in a pvc resource

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: "slow"
  volumeName: foo-pv
  ...

Tip: when creating pvc, use the storageClassName field to specify the corresponding SC name;
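As a related note (not covered in the original post), a cluster can mark one sc as the default so that pvcs created without an explicit storageClassName are still provisioned dynamically; one commonly used annotation, sketched here with the sc name from the example above:

kubectl patch storageclass slow -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc               # the default class is shown with "(default)" after its name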

Author: Linux-1874
The copyright of this article belongs to the author and cnblogs (博客园). Reprinting is welcome, but this statement must be retained and a link to the original article must be given in a prominent position on the page; otherwise the author reserves the right to pursue legal liability.

Tags: Kubernetes
