PV/PVC storage in Kubernetes


1: PV/PVC storage in Kubernetes

1.1 pv

PersistentVolume (PV)

A PV is storage in the cluster that has been provisioned by an administrator. Just as a node is a cluster resource, a PV is a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.

1.2 pvc

PersistentVolumeClaim (PVC)

A PVC is a request for storage by a user. It is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, a claim can ask to be mounted read/write by a single node or read-only by many nodes).
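
For illustration, a minimal standalone PVC might look like the sketch below; the claim name my-claim is a placeholder, and the nfs storage class matches the PVs created later in this post:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim            # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce         # request single-node read/write access
  storageClassName: nfs     # must match a PV's storageClassName to bind
  resources:
    requests:
      storage: 5Gi          # request at least 5Gi of capacity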

1.3 static pv

The cluster administrator creates a number of PVs. These carry the details of the real storage that is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

1.4 dynamic provisioning

When none of the static PVs created by the administrator matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to occur. A claim that requests the class "" effectively disables dynamic provisioning for itself.
To enable dynamic provisioning based on storage classes, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server, for example by including DefaultStorageClass in the comma-separated, ordered list of values for the API server's admission control flag.
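
A minimal StorageClass sketch is shown below. The class name nfs matches the one used later in this post, but the external provisioner example.com/nfs is an assumption for illustration; a real cluster must run a provisioner for dynamic provisioning to work:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs                    # class name that PVCs reference via storageClassName
provisioner: example.com/nfs   # assumed external provisioner, illustration only
reclaimPolicy: Retain          # dynamically provisioned PVs keep their data on release
volumeBindingMode: Immediate   # bind a matching PV as soon as the PVC is created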

1.5 binding

A control loop in the master watches for new PVCs, finds a matching PV when possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to the PVC. Otherwise, the user always gets at least the storage they asked for, although the volume's capacity may exceed the request. Once a PV and PVC are bound, the PersistentVolumeClaim binding is exclusive, regardless of how they were bound. A PVC-to-PV binding is a one-to-one mapping.
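
One simple way to watch a binding happen (a sketch, assuming the PVs and PVCs created later in this post) is to watch both resource types until the STATUS column turns Bound:

kubectl get pv,pvc --watch   # STATUS moves from Available/Pending to Bound once the control loop binds them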

1.6 protection of persistent volume claims

The purpose of PVC protection is to ensure that a PVC actively used by a Pod is not removed from the system, because removing it could result in data loss.
When the PVC protection alpha feature is enabled, if a user deletes a PVC that is still in use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pod.
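
While protection applies, the claim carries a kubernetes.io/pvc-protection finalizer; a quick check (a sketch using www-web-0, a PVC name that appears later in this post):

kubectl describe pvc www-web-0 | grep Finalizers   # shows [kubernetes.io/pvc-protection] while the feature is active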

2.1 persistent volume types

PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugin types:
GCEPersistentDisk AWSElasticBlockStore AzureFile AzureDisk FC (Fibre Channel)
FlexVolume Flocker NFS iSCSI RBD (Ceph Block Device) CephFS
Cinder (OpenStack block storage) Glusterfs VsphereVolume Quobyte Volumes
HostPath VMware Photon Portworx Volumes ScaleIO Volumes StorageOS

2.2 persistent volume demo code

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
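
To try this manifest (a sketch; the file name pv-demo.yaml is arbitrary), apply it and confirm the PV appears:

kubectl apply -f pv-demo.yaml
kubectl get pv pv0003   # STATUS should read Available until a claim binds it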

2.3 PV access mode

A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported read-only from the server. Each PV gets its own set of access modes describing that PV's specific capabilities:
ReadWriteOnce - the volume can be mounted read/write by a single node
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany - the volume can be mounted read/write by many nodes

On the command line, the access mode is abbreviated to:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany

2.4 reclaim policies

Retain - manual reclamation
Recycle - basic scrub (rm -rf /thevolume/*)
Delete - the associated storage asset (such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted
Currently, only NFS and HostPath support the Recycle policy. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.
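
The reclaim policy of an existing PV can also be changed in place; for example (a sketch, setting the nfspv1 volume created later in this post to Retain):

kubectl patch pv nfspv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'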

2.5 state

A volume can be in one of the following states:
Available - a free resource that is not yet bound to a claim
Bound - the volume is bound to a claim
Released - the claim has been deleted, but the resource has not yet been reclaimed by the cluster
Failed - the volume's automatic reclamation failed
The command line displays the name of the PVC bound to each PV
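
For example, kubectl get pv prints the phase in the STATUS column and the bound claim in the CLAIM column; illustrative (made-up) output, trimmed of the REASON and AGE columns:

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS
nfspv1   10Gi       RWO            Retain           Bound    default/www-web-0   nfs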

2.6 persistent nfs deployment configuration

Log in to node04.flyfish and configure the NFS server:
----

yum install -y nfs* nfs-utils rpcbind

mkdir /nfs

chmod 777 /nfs

chown nfsnobody /nfs

cat /etc/exports

/nfs *(rw,no_root_squash,no_all_squash,sync)

systemctl start rpcbind

systemctl start nfs

Install the NFS client mount packages on all k8s nodes.

Log in to node01.flyfish:

yum install rpcbind nfs-utils 

mkdir /test
showmount -e 192.168.100.14 

mount -t nfs 192.168.100.14:/nfs /test
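
A quick sanity check (not part of the original steps) confirms the export is mounted and writable before moving on:

df -h /test                   # the NFS export should appear as the filesystem backing /test
echo test > /test/check.txt   # confirm the export is writable from this node
umount /test                  # detach the test mount when finished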

Configure the PV:
vim pv.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.100.14
---

kubectl apply -f pv.yaml
kubectl get pv 

Create more NFS PVs
----
Log in to node04.flyfish:

mkdir /nfs1 /nfs2 /nfs3

chmod 777 /nfs1 /nfs2 /nfs3

vim /etc/exports
---
/nfs     *(rw,no_root_squash,no_all_squash,sync)
/nfs1    *(rw,no_root_squash,no_all_squash,sync)
/nfs2    *(rw,no_root_squash,no_all_squash,sync)
/nfs3    *(rw,no_root_squash,no_all_squash,sync)
---

service rpcbind restart 
service nfs restart 
exportfs -v   # list the active exports to verify


vim pv1.yaml
------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs1
    server: 192.168.100.14
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /nfs2
    server: 192.168.100.14
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv4
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.100.14
-----

kubectl apply -f pv1.yaml

kubectl get pv 

Create the PVC application
------
vim pvc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 1Gi
---
kubectl apply -f pvc.yaml
---------
kubectl get pod 
kubectl get pv 
kubectl get pvc

As can be seen above, the StatefulSet asks for 3 replicas, but only one Pod is created. For each Pod to start, its claim must find a PV that satisfies both accessModes: [ "ReadWriteOnce" ] and storageClassName: "nfs". Only nfspv1 meets both conditions, so only one Pod can be created.

kubectl get pod  

kubectl describe pod web-1

For all replicas to be created successfully, the access modes and storage class of the remaining PVs must be changed.
First, delete nfspv3 and nfspv4:

kubectl delete pv nfspv3

kubectl delete pv nfspv4

vim pv2.yaml
------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs2
    server: 192.168.100.14
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv4
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.100.14
------
kubectl apply -f pv2.yaml

kubectl get pv 

kubectl get pod 

kubectl get pv 

kubectl get pvc 

kubectl describe pv nfspv1
kubectl describe pv nfspv2
kubectl describe pv nfspv3
kubectl describe pv nfspv4

kubectl get pv 
kubectl get pod -o wide  

Log in to node04.flyfish:

cd /nfs

echo aaaaa > index.html

cd /nfs2 

echo bbbb >> index.html

cd /nfs3 

echo "web2 web2" >> index.html 
curl 10.244.1.6
curl 10.244.2.8
curl 10.244.2.7

kubectl delete pod web-0

3: about StatefulSet

3.1 understanding of StatefulSet

1. Pod names (network identities) match the pattern $(statefulset name)-$(ordinal), for example web-0, web-1, web-2.

2. StatefulSet creates a DNS domain name for each Pod replica, in the format $(podname).$(headless service name). This means services communicate with each other via Pod domain names rather than Pod IPs: when the node hosting a Pod fails and the Pod is moved to another node, the Pod IP changes but the Pod domain name does not.

3. StatefulSet uses a Headless Service to control the domain of its Pods. The FQDN of this domain is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain (see the lookup sketch after this list).

4. According to volumeClaimTemplates, a PVC is created for each Pod, named after the pattern $(volumeClaimTemplates.name)-$(pod name). For example, with volumeMounts.name=www and Pod names web-[0-2], the PVCs created are www-web-0, www-web-1, www-web-2.

5. Deleting a Pod does not delete its PVC; deleting a PVC manually releases its PV automatically.
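
A quick way to see these Pod domain names resolve (a sketch using a throwaway busybox Pod; the image tag is an assumption) is:

kubectl run -i --tty --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx
# resolves web-0.nginx.default.svc.cluster.local to the Pod's current IP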

3.2 StatefulSet ordering

Ordered deployment: when a StatefulSet is deployed with multiple Pod replicas, the Pods are created sequentially (from 0 to N-1), and every earlier Pod must be Running and Ready before the next Pod starts.

Ordered deletion: when Pods are deleted, they are terminated in reverse order, from N-1 to 0.

Ordered scaling: when a scaling operation is applied to a Pod, all Pods before it must already be Running and Ready, just as during deployment.
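
For example, scaling the web StatefulSet from this post up and back down exercises both orderings:

kubectl scale statefulset web --replicas=5   # web-3, then web-4, are created in order
kubectl scale statefulset web --replicas=3   # web-4, then web-3, are terminated in reverse order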

3.3 StatefulSet use cases:

1. Stable persistent storage: a Pod can still access the same persistent data after being rescheduled, implemented with PVCs.

2. Stable network identity: a Pod's PodName and HostName remain unchanged after it is rescheduled.

3. Ordered deployment and ordered scale-up, implemented with init containers.

4. Ordered scale-down.

3.4 steps to delete a StatefulSet as a whole

Delete the Pods first:

kubectl delete pod --all 

Delete the Services:

kubectl delete svc --all

Delete the Deployments:

kubectl delete deploy --all

Delete the StatefulSets:

kubectl delete statefulset --all

Delete the PVCs:

kubectl delete pvc --all 

Delete the PVs:

kubectl delete pv --all

Finally, delete the files left on the NFS exports.
