State changes of PV and PVC in Kubernetes


We should be familiar with the various states of PV and PVC, but questions often come up in practice: why did my PVC become Lost? How can a newly created PVC bind to a previous PV? Can I restore a previous PV? Here we will walk through the state changes of PV and PVC again, scenario by scenario.

The state changes of PV and PVC in different situations are described in the sections below:

Create PV
Under normal circumstances, a PV is Available after it is created successfully:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s  # Specify nfs mount point
    server: 10.151.30.1  # Specify nfs service address

After creating the PV object above, you can see that its status is Available, which means it can be bound by a PVC:

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Available           manual                  7s

New PVC
A newly created PVC starts in the Pending state. If there is a suitable PV, it immediately changes from Pending to Bound, and the corresponding PV also changes to Bound, meaning the PVC and PV are bound together. To observe the Pending state, we can create the PVC first and the PV afterwards.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Create the PVC resource object; the newly created PVC will be in the Pending state:

$ kubectl get pvc nfs-pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Pending                                      manual         7s

Once the PVC finds a suitable PV to bind, it immediately becomes Bound, and the PV also changes from Available to Bound:

$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         2m8s
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Bound    default/nfs-pvc   manual                  23s
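As an aside, if you want a new PVC to bind to one specific PV deterministically (one of the questions raised at the beginning), you can set spec.volumeName in the PVC. A minimal sketch, reusing the nfs-pv and nfs-pvc names from this article:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  volumeName: nfs-pv    # skip the matching process and bind to this PV directly
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The storageClassName, accessModes, and requested storage must still be compatible with the PV, otherwise the PVC stays Pending.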

Delete PV
Now that the PVC and PV are bound, what happens if we accidentally delete the PV?

$ kubectl delete pv nfs-pv
persistentvolume "nfs-pv" deleted

^C
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Terminating   default/nfs-pvc   manual                  12m
$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         13m

In fact, the delete command above hangs (note the ^C used to interrupt it); the PV cannot actually be deleted. Instead, the PV enters the Terminating state, while the corresponding PVC remains Bound. In other words, because the PV and PVC are bound together, the PV cannot be deleted first: it just sits in Terminating, and the PVC is unaffected. So what should we do at this point?

We can force the deletion by editing the PV and removing the finalizers attribute from its metadata:

$ kubectl edit pv nfs-pv
# Delete the entries under metadata.finalizers (typically kubernetes.io/pv-protection)

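Instead of editing interactively, the same finalizer removal can be done with a single patch command. A sketch, assuming the PV is named nfs-pv as in the example above:

```shell
# Clear metadata.finalizers on the stuck PV so the pending delete can complete
kubectl patch pv nfs-pv --type=merge -p '{"metadata":{"finalizers":null}}'
```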
After editing, the PV is deleted, and the PVC moves to the Lost state:

$ kubectl get pv nfs-pv
Error from server (NotFound): persistentvolumes "nfs-pv" not found
$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Lost     nfs-pv   0                         manual         23m

Recreate PV
When we see that the PVC is in the Lost state, we don't need to worry. It just means the PV it was bound to no longer exists, while the PVC still records the binding information (the volume name nfs-pv).

So the problem is easy to solve: just recreate the previous PV:

# Recreate PV
$ kubectl apply -f volume.yaml
persistentvolume/nfs-pv created

Once the PV is recreated successfully, both the PVC and the PV return to the Bound state:

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Bound    default/nfs-pvc   manual                  93s
# PVC returns to its normal Bound state
$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         27m

Delete PVC
The above covered deleting the PV first; what happens if we delete the PVC first instead?

$ kubectl delete pvc nfs-pvc
persistentvolumeclaim "nfs-pvc" deleted
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Released   default/nfs-pvc   manual                  3m36s

We can see that after the PVC is deleted, the PV becomes Released. However, if we look carefully at the claimRef attribute, it still retains the binding information of the deleted PVC. You can export the PV object to confirm this:

$ kubectl get pv nfs-pv -o yaml


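For reference, the claimRef section in the exported PV looks roughly like this (the field values here are illustrative placeholders; the uid still points at the deleted PVC, which is why the PV stays Released):

```yaml
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-pvc
    namespace: default
    uid: <uid-of-the-deleted-pvc>   # illustrative placeholder
```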
At this point you may think: my PVC has been deleted and the PV has become Released, so if I just recreate the PVC it will rebind to this PV. In fact it will not: a PVC can only bind to a PV in the Available state, and a Released PV is not Available.

This is where we need to intervene manually. In a real production environment, the administrator would back up or migrate the data first, then edit the PV and delete the claimRef reference to the old PVC. When the Kubernetes PV controller watches the PV and sees the change, it sets the PV back to Available, and a PV in the Available state can be bound by other PVCs.

Directly edit the PV and delete the content of the claimRef attribute:

# Delete the content of claimRef
$ kubectl edit pv nfs-pv
persistentvolume/nfs-pv edited
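Equivalently, the claimRef can be removed non-interactively with a JSON patch. A sketch, again assuming the PV is named nfs-pv:

```shell
# Remove spec.claimRef so the controller returns the PV to Available
kubectl patch pv nfs-pv --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'
```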


After the deletion, the PV returns to the normal Available state, and a recreated PVC can bind to it normally:

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Available           manual                  12m

In recent versions of Kubernetes, PV functionality has been further enhanced: features such as volume cloning and snapshots are very useful, and we will cover these new features later.


Posted on Mon, 15 Jun 2020 00:34:40 -0400 by waseembari1985