PVC and PV
1: PVC and PV overview
1.1 What are PVC and PV
A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by the administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a user's request for storage. The usage logic of a PVC: define a storage volume of type PVC in the pod and specify the requested size directly. The PVC must establish a relationship with a matching PV: the PVC applies for a PV according to its definition, and the PV is created from the underlying storage space. PV and PVC are the storage abstractions provided by Kubernetes.
Although PersistentVolumeClaims allow users to consume abstract storage resources, it is common to need PVs with different properties for different scenarios. The cluster administrator then has to offer PVs that differ in more than just size and access mode, while users should not have to understand the implementation details of these volumes. For such requirements, the StorageClass resource can be used.
A PV is a resource in the cluster; a PVC is a request for, and a claim on, those resources.
The interaction between a PV and a PVC follows this lifecycle:
- Provisioning -> Binding -> Using -> Releasing -> Recycling
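The current phase of each PV is reported in the STATUS column, so the lifecycle can be observed live; a minimal sketch:

kubectl get pv --watch    # STATUS moves through Available -> Bound -> Released as the lifecycle progresses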
1.2 Two PV provisioning modes
PVs can be provisioned in two ways: statically and dynamically.
- Static -> pre-allocated, fixed storage space
  - The cluster administrator creates a number of PVs. They carry the details of the real storage available to cluster users. They exist in the Kubernetes API and are available for consumption.
- Dynamic -> storage space is created on demand through storage classes
  - When none of the administrator's static PVs matches a user's PVC, the cluster may try to provision a volume dynamically for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning. A claim that requests the class "" effectively disables dynamic provisioning for itself (see the snippet below).
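For illustration, a PVC controls this through spec.storageClassName; a minimal sketch (the class name "fast" is a hypothetical example):

spec:
  storageClassName: "fast"    # hypothetical class: request dynamic provisioning from this class
  # storageClassName: ""      # empty string: explicitly disable dynamic provisioning for this claim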
Summary
A PV carves a storage resource (a logical entity) out of the space on a storage device:
- Static: storage resources created by the k8s administrator for the k8s cluster (pods); they can be created from remote NFS or a distributed object storage system (with the PV storage size and access modes fixed in advance)
- Dynamic (StorageClass resource): used to dynamically and automatically create the PV resources that a PVC applies for, for pods to use
Flow: pod -> uses PVC -> request -> PV resource -> space in the storage device
2: Viewing how PV and PVC are defined
2.1 Using explain to view the definition of PV
2.1.1 Viewing how PV is defined
kubectl explain pv    # view how PV is defined
FIELDS:
   apiVersion
   kind
   metadata
   spec
2.1.2 Viewing the PV spec
[root@master ~]# kubectl explain pv.spec
spec:
  nfs             (define the storage type)
    path          (define the mount volume path)
    server        (define the server name)
  accessModes     (define the access modes; a list, so multiple access modes can be defined)
    ReadWriteOnce (RWO)  single-node read/write
    ReadOnlyMany  (ROX)  multi-node read-only
    ReadWriteMany (RWX)  multi-node read/write
  capacity        (define the size of the PV)
    storage       (specify the size)
2.2 Using explain to view the definition of PVC
2.2.1 Viewing how PVC is defined
kubectl explain pvc    # view how PVC is defined
KIND:     PersistentVolumeClaim
VERSION:  v1
FIELDS:
   apiVersion   <string>
   kind         <string>
   metadata     <Object>
   spec         <Object>
2.2.2 Viewing the PVC spec
kubectl explain pvc.spec    # view the PVC spec
spec:
  accessModes   (define the access modes; must be a subset of the PV's access modes)
  resources     (define the size of the requested resource)
    requests:
      storage:
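Mapping these fields onto a manifest, a minimal PVC skeleton looks like this (a sketch; the name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # placeholder name
spec:
  accessModes:               # must be a subset of the target PV's access modes
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # requested size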
3: Configuring NFS to use PV and PVC
3.1 Configuring NFS storage
[root@nfs ~]# yum -y install nfs-utils rpcbind
[root@nfs ~]# mkdir -p /data/volumes/v{1..5}
[root@nfs ~]# ls -R /data/
[root@nfs ~]# chmod -R 777 /data/*
# configure the directories for the NFS share
[root@nfs ~]# for i in {1..5}
do
  echo "/data/volumes/v$i 192.168.23.0/24(rw,no_root_squash,sync)" >> /etc/exports
done

# write the page content
[root@nfs ~]# for i in {1..5}
do
  echo "this is pv00$i" > /data/volumes/v$i/index.html
done
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl start nfs
[root@nfs ~]# exportfs -arv
[root@nfs ~]# showmount -e
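If the exports took effect, the showmount -e output should resemble this sketch (order may vary):

Export list for nfs:
/data/volumes/v5 192.168.23.0/24
/data/volumes/v4 192.168.23.0/24
/data/volumes/v3 192.168.23.0/24
/data/volumes/v2 192.168.23.0/24
/data/volumes/v1 192.168.23.0/24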
3.2 Defining PVs
Define five PVs, each with its mount path and access modes; the PVs differ in size.
[root@master ~]# vim pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: stor01
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: stor01
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: stor01
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: stor01
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: stor01
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 5Gi
[root@master ~]# kubectl apply -f pv-demo.yaml
[root@master ~]# kubectl get pv
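For reference, the output should resemble the following sketch (AGE will vary); manually created PVs default to the Retain reclaim policy:

NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   10s
pv002   2Gi        RWO            Retain           Available                                   10s
pv003   2Gi        RWO,RWX        Retain           Available                                   10s
pv004   4Gi        RWO,RWX        Retain           Available                                   10s
pv005   5Gi        RWO,RWX        Retain           Available                                   10s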
3.3 Defining PVCs
3.3.1 Case 1
The access modes and storage size (CAPACITY column) requested by the PVC match a PV exactly.
[root@master ~]# vim pod-vol-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vo1-pvc
  namespace: default
spec:
  containers:
    - name: myapp
      image: ikubernetes/myapp:v1
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: mypvc
[root@master ~]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vo1-pvc created
[root@master ~]# kubectl get pods,pv -o wide
[root@master ~]# curl 10.244.1.151
this is pv003
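The PVC binds to pv003, the only PV whose access modes (RWX) and 2Gi capacity match exactly; a sketch of the expected kubectl get pvc output:

NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv003    2Gi        RWO,RWX                       30s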
3.3.2 Case 2
If the access modes match but no available PV has the exact size, the closest size is selected among the PVs larger than the requested size. Here pv003 (an exact 2Gi match) is already bound to mypvc, so among the remaining RWX PVs (pv004 at 4Gi, pv005 at 5Gi) the closest one, pv004, is chosen.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-test02
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vo2-pvc
  namespace: default
spec:
  containers:
    - name: myapp
      image: ikubernetes/myapp:v1
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: mypvc-test02
[root@master ~]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc-test02 created
pod/pod-vo2-pvc created
[root@master ~]# kubectl get pods,pv,pvc -o wide
[root@master ~]# curl 10.244.2.117
this is pv004
3.3.3 Case 3
If the access modes do not match, or no PV is large enough to satisfy the request, both the pod and the PVC stay in the Pending state.
[root@master ~]# vim pod-vol-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-test03
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 7Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vo3-pvc
  namespace: default
spec:
  containers:
    - name: myapp
      image: ikubernetes/myapp:v1
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: mypvc-test03
[root@master ~]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc-test03 created
pod/pod-vo3-pvc created
[root@master ~]# kubectl get pods,pv,pvc -o wide
[root@master ~]# kubectl describe pod pod-vo3-pvc
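The describe output explains the Pending state; the exact wording varies by Kubernetes version, but the events should resemble this sketch:

Events:
  Type     Reason            Message
  ----     ------            -------
  Warning  FailedScheduling  ... pod has unbound immediate PersistentVolumeClaims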
3.3.4 Case 4
Using the multi-node read/write RWX (ReadWriteMany) mode, attach a newly created pod to the existing PVC.
[root@master ~]# vim pod-vol-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vo4-pvc
  namespace: default
spec:
  containers:
    - name: myapp
      image: ikubernetes/myapp:v1
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: mypvc-test02
[root@master ~]# kubectl apply -f pod-vol-pvc.yaml
pod/pod-vo4-pvc created
[root@master ~]# kubectl get pods,pv,pvc -o wide
[root@master ~]# curl 10.244.1.152
this is pv004
3.3.5 Summary of PVC binding and multi-node read/write
When the accessModes and storage size requested by a PVC do not exactly match any PV:
- When the accessModes match:
  - A PV with storage larger than the request is selected.
  - When more than one PV is larger, the one closest in size is selected.
- When the accessModes do not match, or every PV's storage is smaller than the request:
  - The pod and the PVC stay in the Pending state.
Multi-node read/write:
When creating a pod, set pod.spec.volumes.persistentVolumeClaim.claimName to the name of an existing PVC; the new pod then uses the existing PVC, and through it the PV.
3.4 Deleting a PVC binding
[root@master ~]# kubectl describe persistentvolumeclaims mypvc-test02
....
Mounted By:  pod-vo2-pvc
             pod-vo4-pvc
.....

# first delete all pods using this PVC
[root@master ~]# kubectl delete pod pod-vo{2,4}-pvc
pod "pod-vo2-pvc" deleted
pod "pod-vo4-pvc" deleted

# then delete the PVC
[root@master ~]# kubectl delete persistentvolumeclaims mypvc-test02
persistentvolumeclaim "mypvc-test02" deleted

# the PVC is gone, but because the PV's reclaim policy is Retain, the PV is left in the
# Released state; at this point it cannot be bound by a new PVC
[root@master ~]# kubectl get pods,pv,pvc -o wide
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pv004   4Gi        RWO,RWX        Retain           Released   default/mypvc-test02                           73m   Filesystem
Use kubectl edit to edit the PV resource in place and delete the claimRef block. After saving, check with kubectl get: the status automatically changes back to Available, and the PV can be reused.
[root@master ~]# kubectl edit persistentvolume pv004
...
# delete the claimRef block:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mypvc-test02
    namespace: default
    resourceVersion: "242922"
    uid: 95ef0c00-754e-4a8e-81c3-f8ee4d5f9824
.....
[root@master ~]# kubectl get pods,pv,pvc -o wide
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pv004   4Gi        RWO,RWX        Retain           Available                                   81m   Filesystem
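As a non-interactive alternative to kubectl edit, the claimRef block can also be removed with a JSON patch; a sketch:

[root@master ~]# kubectl patch pv pv004 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'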
4: StorageClass
4.1 Why StorageClass
There is a problem with using PV and PVC as above: when a PVC applies for storage space, there may be no ready-made PV that satisfies the request. The NFS example only succeeded because we had prepared PVs matching the expected requests in advance.
So what happens when a PVC applies for storage space and no existing PV meets its requirements? For this, Kubernetes provides administrators with a way to describe "classes" of storage: the StorageClass.
For example, suppose 1 TB of space in a storage system is set aside for Kubernetes. When a user needs a 10G PVC, a request is sent through a RESTful interface to create a 10G image in that storage space, and a 10G PV is then defined in the cluster and supplied to the PVC for mounting. For this to work, the storage system must support a RESTful interface: Ceph distributed storage does, for example, while GlusterFS needs a third-party interface to complete such requests.
4.2 Example
kubectl explain storageclass    # StorageClass is also a resource on k8s
KIND:     StorageClass
VERSION:  storage.k8s.io/v1
FIELDS:
   allowVolumeExpansion   <boolean>
   allowedTopologies      <[]Object>
   apiVersion             <string>
   kind                   <string>
   metadata               <Object>
   mountOptions           <[]string>          # mount options
   parameters             <map[string]string> # parameters accepted depend on the provisioner; for example,
                                              # the value io1 for the parameter type and the parameter
                                              # iopsPerGB are specific to EBS PVs; when parameters are
                                              # omitted, default values are used
   provisioner            <string> -required- # storage provisioner, determines which volume plugin is
                                              # used to provision PVs; this field is required
   reclaimPolicy          <string>            # reclaim policy, Delete or Retain; if not specified when
                                              # the StorageClass object is created, it defaults to Delete
   volumeBindingMode      <string>            # binding mode of the volume
A StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. Because a StorageClass requires a separate storage system, it is not demonstrated here. Per other materials, a StorageClass is defined as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
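A PVC would then consume this class by name; a minimal sketch, assuming the standard class above exists in the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-standard        # hypothetical claim name
spec:
  storageClassName: standard  # triggers dynamic provisioning through the class's provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi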