Kubernetes Local PV: Basic Usage and Principles


Why Local PV exists

1. With a hostPath volume, you also have to pin the Pod's scheduling to a specific node yourself.
2. The directory has to be created in advance and its permissions configured carefully; for example, a directory created by the root user may not be usable by an ordinary user.
3. The size cannot be specified, so the disk may fill up at any time, and there is no I/O isolation mechanism.
4. A StatefulSet cannot use hostPath volumes, and Helm charts written around PVCs cannot be made compatible with hostPath volumes.

Local PV usage scenario

It is suitable for high-priority systems that need to store data on multiple different nodes and have high I/O requirements.

Difference between Local PV and conventional PV

With a conventional PV, Kubernetes first schedules the Pod to a node and then persists the volume directory on that machine. With a Local PV, operators have to prepare the node's disks in advance, and the distribution of these Local PVs must be taken into account when the Pod is scheduled.

Create Local PV
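The following is a minimal sketch of a Local PV manifest pinned to a specific node; the name example-local-pv, the 5Gi capacity and the path /mnt/disks/vol1 are illustrative, and local-storage refers to the StorageClass introduced in the delayed-binding section below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv          # illustrative name
spec:
  capacity:
    storage: 5Gi                  # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1         # illustrative disk path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node01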


The local field above marks this as a Local Persistent Volume, and the path field gives the path of the disk that backs the PV. That disk exists on the k8s-node01 node, which means a Pod using this PV must run on that node.

Create PVC
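A matching PVC might look like the following sketch; the claim name example-local-claim and the 5Gi request are illustrative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim       # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi                # illustrative request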

Create pod
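A Pod that consumes the claim might look like this sketch; the Pod name, image and mount path are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod               # illustrative name
spec:
  containers:
    - name: app
      image: nginx                # illustrative image
      volumeMounts:
        - name: local-vol
          mountPath: /usr/share/nginx/html   # illustrative mount path
  volumes:
    - name: local-vol
      persistentVolumeClaim:
        claimName: example-local-claim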


After the Pod is created, you can see that it is scheduled to k8s-node01, and at that point the PVC and PV are both in the Bound state.

Delete Local PV

Because the PV is created manually, deleting a Local PV has to follow this order:
1. Delete the Pod using this PV
2. Delete PVC
3. Delete PV

hostPath vs. Local PV

In short, a hostPath volume requires you to pin the Pod to a node yourself and bypasses the PV/PVC lifecycle, while a Local PV expresses the node constraint in the PV definition so the scheduler can take it into account, and is consumed through a normal PVC, at the cost of having to prepare the PVs manually.

StorageClass delay binding mechanism
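A StorageClass for Local PVs might look like the following sketch; the name local-storage is illustrative, while the provisioner and volumeBindingMode values are the ones discussed below.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage             # illustrative name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer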

The provisioner field is set to kubernetes.io/no-provisioner because a Local Persistent Volume does not support Dynamic Provisioning, so the PV has to be created manually in advance.

The volumeBindingMode field is set to WaitForFirstConsumer, which enables a feature that is very important for Local Persistent Volumes: delayed binding. Delayed binding means that when we submit a PVC, the StorageClass postpones establishing the binding between the PVC and a PV.

The reason is as follows: suppose the cluster currently has two PVs with the same attributes, located on two different nodes, Node1 and Node2, and the Pod we define has to run on Node1. If the PVC were bound immediately, it could end up bound to the PV on Node2, and the Pod would then fail to schedule. That is why the StorageClass's binding operation needs to be delayed.

In other words, binding is postponed until the first Pod that uses the PVC reaches the scheduler. The scheduler then takes all the scheduling rules into account, including the node location of each PV, and decides in one step which PV the Pod's claim should be bound to.

Data security risk

A local volume is still constrained by the availability of its node, so it is not suitable for all applications. If the node becomes unhealthy, the local volume becomes inaccessible and any Pod using it cannot run. Applications that use local volumes must be able to tolerate this reduced availability and the potential for data loss; whether data loss actually occurs depends on the underlying disk storage and the data protection implemented on the node.

Local PV best practices

<1> For better I/O isolation, it is recommended to use a whole disk as a storage volume;
<2> To isolate storage space, it is recommended to use a dedicated disk partition for each storage volume;
<3> While an old PV that declares node affinity for a particular node still exists, avoid re-creating a node with the same node name; otherwise the system may assume the new node contains the old PV.
<4> For storage volumes with a file system, it is recommended to use their UUIDs (for example, the output of ls -l /dev/disk/by-uuid) both in the fstab entry and in the directory name of the volume's mount point. This ensures that the wrong local volume is not mounted even if its device path changes (for example, if /dev/sda1 becomes /dev/sdb1 after a new disk is added). It also ensures that if another node with the same name is created, any volumes on that node remain unique and are not mistaken for volumes on the other node.
<5> For a raw block volume without a file system, use its unique ID as the name of the symbolic link. Depending on your environment, the volume ID in /dev/disk/by-id/ may contain a unique hardware serial number; otherwise, generate a unique ID yourself. The uniqueness of the symlink name likewise ensures that if another node with the same name is created, any volumes on that node remain unique and are not mistaken for volumes on the other node.

Local PV limitations

In testing, it turns out that the Local PV capacity used by a Pod cannot be limited: the Pod can always use the full capacity of the mounted Local PV. In other words, Local PV does not support dynamic management of PV space, so you have to plan Local PV capacity manually: make a global plan for the local resources available on each node, divide them into volumes of various sizes, and mount them under the automatic discovery directory.

What if the storage space allocated to one of the containers turns out to be insufficient?

It is recommended to use LVM (Logical Volume Manager) on Linux to manage the local disk space on each node:
<1> Create one large VG (volume group) and put all of the node's available storage space into it;
<2> Based on the expected future storage needs of containers, create a number of logical volumes (LVs) in batches in advance and mount them under the automatic discovery directory;
<3> Do not use up all the storage in the VG; reserve a small portion for expanding the storage of individual containers later;
<4> Use lvextend to expand the storage volume used by a specific container.

Basic implementation principle

// Run starts all of this controller's control loops
func (ctrl *PersistentVolumeController) Run(stopCh <-chan struct{}) {
    
    ......
    go wait.Until(ctrl.resync, ctrl.resyncPeriod, stopCh)
    go wait.Until(ctrl.volumeWorker, time.Second, stopCh)
    go wait.Until(ctrl.claimWorker, time.Second, stopCh)

    metrics.Register(ctrl.volumes.store, ctrl.claims, &ctrl.volumePluginMgr)

    <-stopCh
}

The Run function above is the entry point; it mainly starts three goroutines running three important methods: resync, volumeWorker and claimWorker. resync is mainly responsible for re-enqueuing the synchronized PVs and PVCs into volumeQueue and claimQueue, where they are consumed by volumeWorker and claimWorker.

volumeWorker

volumeWorker consumes items from volumeQueue in a continuous loop. Its core function is updateVolume, whose code is as follows:

// updateVolume runs in worker thread and handles "volume added",
// "volume updated" and "periodic sync" events.
func (ctrl *PersistentVolumeController) updateVolume(volume *v1.PersistentVolume) {
    // Store the new volume version in the cache and do not process it if this
    // is an old version.
    // Update cached Volume
    new, err := ctrl.storeVolumeUpdate(volume)
    if err != nil {
        klog.Errorf("%v", err)
    }
    if !new {
        return
    }
    //Bind PVC and PV according to the current PV object specification
    err = ctrl.syncVolume(volume)
    if err != nil {
        if errors.IsConflict(err) {
            // Version conflict error happens quite often and the controller
            // recovers from it easily.
            klog.V(3).Infof("could not sync volume %q: %+v", volume.Name, err)
        } else {
            klog.Errorf("could not sync volume %q: %+v", volume.Name, err)
        }
    }
}

The updateVolume function mainly calls syncVolume, which looks like this:

// syncVolume is the main controller method to decide what to do with a volume.
// It's invoked by appropriate cache.Controller callbacks when a volume is
// created, updated or periodically synced. We do not differentiate between
// these events.
func (ctrl *PersistentVolumeController) syncVolume(volume *v1.PersistentVolume) error {
    klog.V(4).Infof("synchronizing PersistentVolume[%s]: %s", volume.Name, getVolumeStatusForLogging(volume))

    // Set correct "migrated-to" annotations on PV and update in API server if
    // necessary
    newVolume, err := ctrl.updateVolumeMigrationAnnotations(volume)
    if err != nil {
        // Nothing was saved; we will fall back into the same
        // condition in the next call to this method
        return err
    }
    volume = newVolume

    // [Unit test set 4]
    // If no claimRef is set, this is an unused PV; call updateVolumePhase to set the volume's state to Available
    if volume.Spec.ClaimRef == nil {
        // Volume is unused
        klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is unused", volume.Name)
        if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
            // Nothing was saved; we will fall back into the same
            // condition in the next call to this method
            return err
        }
        return nil
    } else /* pv.Spec.ClaimRef != nil */ {
        // Volume is bound to a claim.
        // The PV is pre-bound to a PVC that has not bound it yet; keep the volume's status Available
        if volume.Spec.ClaimRef.UID == "" {
            // The PV is reserved for a PVC; that PVC has not yet been
            // bound to this PV; the PVC sync will handle it.
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is pre-bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
            if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
                // Nothing was saved; we will fall back into the same
                // condition in the next call to this method
                return err
            }
            return nil
        }
        klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
        // Get the PVC by _name_
        var claim *v1.PersistentVolumeClaim
        claimName := claimrefToClaimKey(volume.Spec.ClaimRef)
        //Get PVC
        obj, found, err := ctrl.claims.GetByKey(claimName)
        if err != nil {
            return err
        }
        //If not found, resynchronize PVC
        if !found {
            // If the PV was created by an external PV provisioner or
            // bound by external PV binder (e.g. kube-scheduler), it's
            // possible under heavy load that the corresponding PVC is not synced to
            // controller local cache yet. So we need to double-check PVC in
            //   1) informer cache
            //   2) apiserver if not found in informer cache
            // to make sure we will not reclaim a PV wrongly.
            // Note that only non-released and non-failed volumes will be
            // updated to Released state when PVC does not exist.
            if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                obj, err = ctrl.claimLister.PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(volume.Spec.ClaimRef.Name)
                if err != nil && !apierrors.IsNotFound(err) {
                    return err
                }
                found = !apierrors.IsNotFound(err)
                if !found {
                    //Retrieve PVC
                    obj, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(context.TODO(), volume.Spec.ClaimRef.Name, metav1.GetOptions{})
                    if err != nil && !apierrors.IsNotFound(err) {
                        return err
                    }
                    found = !apierrors.IsNotFound(err)
                }
            }
        }
        //Still no PVC found
        if !found {
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
            // Fall through with claim = nil
        } else {
            var ok bool
            claim, ok = obj.(*v1.PersistentVolumeClaim)
            if !ok {
                return fmt.Errorf("cannot convert object from volume cache to volume %q!?: %#v", claim.Spec.VolumeName, obj)
            }
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s found: %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), getClaimStatusForLogging(claim))
        }
        // If this condition holds, the PVC referenced by the volume may have been deleted and another PVC with the same name created; fetch the latest PVC and compare them
        if claim != nil && claim.UID != volume.Spec.ClaimRef.UID {
            // The claim that the PV was pointing to was deleted, and another
            // with the same name created.
            // in some cases, the cached claim is not the newest, and the volume.Spec.ClaimRef.UID is newer than cached.
            // so we should double check by calling apiserver and get the newest claim, then compare them.
            klog.V(4).Infof("Maybe cached claim: %s is not the newest one, we should fetch it from apiserver", claimrefToClaimKey(volume.Spec.ClaimRef))

            claim, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(context.TODO(), volume.Spec.ClaimRef.Name, metav1.GetOptions{})
            if err != nil && !apierrors.IsNotFound(err) {
                return err
            } else if claim != nil { // Compare the freshly fetched claim with the volume's claimRef
                // Treat the volume as bound to a missing claim.
                if claim.UID != volume.Spec.ClaimRef.UID {
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has a newer UID than pv.ClaimRef, the old one must have been deleted", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
                    claim = nil
                } else {
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has a same UID with pv.ClaimRef", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
                }
            }
        }
        // The PVC may have been deleted
        if claim == nil {
            // If we get into this block, the claim must have been deleted;
            // NOTE: reclaimVolume may either release the PV back into the pool or
            // recycle it or do nothing (retain)

            // Do not overwrite previous Failed state - let the user see that
            // something went wrong, while we still re-try to reclaim the
            // volume.
            if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                // Also, log this only once:
                klog.V(2).Infof("volume %q is released and reclaim policy %q will be executed", volume.Name, volume.Spec.PersistentVolumeReclaimPolicy)
                if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
                    // Nothing was saved; we will fall back into the same condition
                    // in the next call to this method
                    return err
                }
            }
            if err = ctrl.reclaimVolume(volume); err != nil {
                // Release failed, we will fall back into the same condition
                // in the next call to this method
                return err
            }
            if volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimRetain {
                // volume is being retained, it references a claim that does not exist now.
                klog.V(4).Infof("PersistentVolume[%s] references a claim %q (%s) that is not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), volume.Spec.ClaimRef.UID)
            }
            return nil
        } else if claim.Spec.VolumeName == "" {
            if pvutil.CheckVolumeModeMismatches(&claim.Spec, &volume.Spec) {
                // Binding for the volume won't be called in syncUnboundClaim,
                // because findBestMatchForClaim won't return the volume due to volumeMode mismatch.
                volumeMsg := fmt.Sprintf("Cannot bind PersistentVolume to requested PersistentVolumeClaim %q due to incompatible volumeMode.", claim.Name)
                ctrl.eventRecorder.Event(volume, v1.EventTypeWarning, events.VolumeMismatch, volumeMsg)
                claimMsg := fmt.Sprintf("Cannot bind PersistentVolume %q to requested PersistentVolumeClaim due to incompatible volumeMode.", volume.Name)
                ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, claimMsg)
                // Skipping syncClaim
                return nil
            }

            if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
                // The binding is not completed; let PVC sync handle it
                klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume not bound yet, waiting for syncClaim to fix it", volume.Name)
            } else {
                // Dangling PV; try to re-establish the link in the PVC sync
                klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it", volume.Name)
            }
            // In both cases, the volume is Bound and the claim is Pending.
            // Next syncClaim will fix it. To speed it up, we enqueue the claim
            // into the controller, which results in syncClaim to be called
            // shortly (and in the right worker goroutine).
            // This speeds up binding of provisioned volumes - provisioner saves
            // only the new PV and it expects that next syncClaim will bind the
            // claim to it.
            ctrl.claimQueue.Add(claimToClaimKey(claim))
            return nil
        } else if claim.Spec.VolumeName == volume.Name { // Volume and claim are bound to each other; update the volume status
            // Volume is bound to a claim properly, update status if necessary
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: all is bound", volume.Name)
            if _, err = ctrl.updateVolumePhase(volume, v1.VolumeBound, ""); err != nil {
                // Nothing was saved; we will fall back into the same
                // condition in the next call to this method
                return err
            }
            return nil
        } else { // The PV is bound to the PVC, but the PVC is bound to a different PV; reset
            // Volume is bound to a claim, but the claim is bound elsewhere
            if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnDynamicallyProvisioned) && volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
                // This volume was dynamically provisioned for this claim. The
                // claim got bound elsewhere, and thus this volume is not
                // needed. Delete it.
                // Mark the volume as Released for external deleters and to let
                // the user know. Don't overwrite existing Failed status!
                if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                    // Also, log this only once:
                    klog.V(2).Infof("dynamically volume %q is released and it will be deleted", volume.Name)
                    if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
                        // Nothing was saved; we will fall back into the same condition
                        // in the next call to this method
                        return err
                    }
                }
                if err = ctrl.reclaimVolume(volume); err != nil {
                    // Deletion failed, we will fall back into the same condition
                    // in the next call to this method
                    return err
                }
                return nil
            } else {
                // Volume is bound to a claim, but the claim is bound elsewhere
                // and it's not dynamically provisioned.
                if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
                    // This is part of the normal operation of the controller; the
                    // controller tried to use this volume for a claim but the claim
                    // was fulfilled by another volume. We did this; fix it.
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by controller to a claim that is bound to another volume, unbinding", volume.Name)
                    if err = ctrl.unbindVolume(volume); err != nil {
                        return err
                    }
                    return nil
                } else {
                    // The PV must have been created with this ptr; leave it alone.
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by user to a claim that is bound to another volume, waiting for the claim to get unbound", volume.Name)
                    // This just updates the volume phase and clears
                    // volume.Spec.ClaimRef.UID. It leaves the volume pre-bound
                    // to the claim.
                    if err = ctrl.unbindVolume(volume); err != nil {
                        return err
                    }
                    return nil
                }
            }
        }
    }
}

The code above is rather long; its main logic is as follows:
First, check whether the PV's claimRef is empty. If it is, update the PV to the Available state. If claimRef is set but its UID is empty, the PV is pre-bound to a PVC that has not yet been bound back to the PV, so the PV is also set to Available. The controller then fetches the PVC that the PV refers to; to guard against the local cache not yet having the PVC, it double-checks through the apiServer.

If the corresponding PVC is found, its UID is compared with the one recorded in claimRef. If they are not equal, this is not the PVC the PV was bound to: the original PVC has probably been deleted, so the PV's status is updated to Released and reclaimVolume is called to handle the volume according to its persistentVolumeReclaimPolicy.

Once the claim has been verified, the controller checks whether claim.Spec.VolumeName is empty; if it is, the binding is still in progress.

If claim.Spec.VolumeName == volume.Name, the volume and the PVC are bound to each other, and the PV's status is updated to Bound.

The remaining case is that the PV is bound to the PVC, but the PVC is bound to a different PV. The controller checks whether the PV was generated by dynamic provisioning; if so, it releases the PV; if the PV was created manually, it calls unbindVolume to unbind it.

That is the main working logic of volumeWorker.

Now let's look at the working logic of claimWorker:
claimWorker likewise keeps synchronizing PVCs and calls the syncClaim method via updateClaim.

// syncClaim is the main controller method to decide what to do with a claim.
// It's invoked by appropriate cache.Controller callbacks when a claim is
// created, updated or periodically synced. We do not differentiate between
// these events.
// For easier readability, it was split into syncUnboundClaim and syncBoundClaim
// methods.
func (ctrl *PersistentVolumeController) syncClaim(claim *v1.PersistentVolumeClaim) error {
    klog.V(4).Infof("synchronizing PersistentVolumeClaim[%s]: %s", claimToClaimKey(claim), getClaimStatusForLogging(claim))

    // Set correct "migrated-to" annotations on PVC and update in API server if
    // necessary
    newClaim, err := ctrl.updateClaimMigrationAnnotations(claim)
    if err != nil {
        // Nothing was saved; we will fall back into the same
        // condition in the next call to this method
        return err
    }
    claim = newClaim

    if !metav1.HasAnnotation(claim.ObjectMeta, pvutil.AnnBindCompleted) {
        return ctrl.syncUnboundClaim(claim)
    } else {
        return ctrl.syncBoundClaim(claim)
    }
}

The main logic of syncClaim is to perform binding and unbinding through the syncUnboundClaim and syncBoundClaim methods.
syncUnboundClaim is divided into two parts. The first handles the case claim.Spec.VolumeName == "", and the code is as follows:

// syncUnboundClaim is the main controller method to decide what to do with an
// unbound claim.
func (ctrl *PersistentVolumeController) syncUnboundClaim(claim *v1.PersistentVolumeClaim) error {
    // This is a new PVC that has not completed binding
    // OBSERVATION: pvc is "Pending"
    //pending status, binding operation not completed
    if claim.Spec.VolumeName == "" {
        // User did not care which PV they get.
        // Determine whether the claim uses delayed binding mode; this is where Local PV delayed binding comes in
        delayBinding, err := pvutil.IsDelayBindingMode(claim, ctrl.classLister)
        if err != nil {
            return err
        }

        // [Unit test set 1]
        // Find a PV that matches the claim, taking delayed binding into account. Inside this method,
        // candidate PVs are filtered by accessMode and then matched via pvutil.FindMatchingVolume,
        // which is shared by the PV controller and the scheduler; the scheduler needs it because of
        // Local PV delayed binding, where it weighs all factors to pick the most suitable node for the Pod
        volume, err := ctrl.volumes.findBestMatchForClaim(claim, delayBinding)
        if err != nil {
            klog.V(2).Infof("synchronizing unbound PersistentVolumeClaim[%s]: Error finding PV for claim: %v", claimToClaimKey(claim), err)
            return fmt.Errorf("error finding PV for claim %q: %w", claimToClaimKey(claim), err)
        }
        //If no volume is available
        if volume == nil {
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: no volume found", claimToClaimKey(claim))
            // No PV could be found
            // OBSERVATION: pvc is "Pending", will retry
            switch {
            case delayBinding && !pvutil.IsDelayBindingProvisioning(claim):
                if err = ctrl.emitEventForUnboundDelayBindingClaim(claim); err != nil {
                    return err
                }
                //Create PV according to the corresponding plug-in
            case storagehelpers.GetPersistentVolumeClaimClass(claim) != "":
                if err = ctrl.provisionClaim(claim); err != nil {
                    return err
                }
                return nil
            default:
                ctrl.eventRecorder.Event(claim, v1.EventTypeNormal, events.FailedBinding, "no persistent volumes available for this claim and no storage class is set")
            }

            // Mark the claim as Pending and try to find a match in the next
            // periodic syncClaim
            //Find the matching PV for binding in the next cycle
            if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                return err
            }
            return nil
        } else /* pv != nil */ {
            // Found a PV for this claim
            // OBSERVATION: pvc is "Pending", pv is "Available"
            claimKey := claimToClaimKey(claim)
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q found: %s", claimKey, volume.Name, getVolumeStatusForLogging(volume))
            if err = ctrl.bind(volume, claim); err != nil {
                // On any error saving the volume or the claim, subsequent
                // syncClaim will finish the binding.
                // record count error for provision if exists
                // timestamp entry will remain in cache until a success binding has happened
                metrics.RecordMetric(claimKey, &ctrl.operationTimestamps, err)
                return err
            }
            // OBSERVATION: claim is "Bound", pv is "Bound"
            // if exists a timestamp entry in cache, record end to end provision latency and clean up cache
            // End of the provision + binding operation lifecycle, cache will be cleaned by "RecordMetric"
            // [Unit test 12-1, 12-2, 12-4]
            metrics.RecordMetric(claimKey, &ctrl.operationTimestamps, nil)
            return nil
        }
    }

The main logic here is to see whether a suitable PV can be found and, if so, bind to it. If not, check whether the claim's StorageClass provides dynamic provisioning; if so, provision a PV, set the PVC to Pending, and let the next sync cycle complete the binding.
The second half of syncUnboundClaim looks like this:

// syncUnboundClaim is the main controller method to decide what to do with an
// unbound claim.
func (ctrl *PersistentVolumeController) syncUnboundClaim(claim *v1.PersistentVolumeClaim) error {
else /* pvc.Spec.VolumeName != nil */ {
        // [Unit test set 2]
        // User asked for a specific PV.
        klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested", claimToClaimKey(claim), claim.Spec.VolumeName)
        // claim.Spec.VolumeName is set; look up the corresponding PV
        obj, found, err := ctrl.volumes.store.GetByKey(claim.Spec.VolumeName)
        if err != nil {
            return err
        }
        // The requested PV does not exist; update the claim status to Pending
        if !found {
            // User asked for a PV that does not exist.
            // OBSERVATION: pvc is "Pending"
            // Retry later.
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested and not found, will try again next time", claimToClaimKey(claim), claim.Spec.VolumeName)
            if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                return err
            }
            return nil
        } else {
            volume, ok := obj.(*v1.PersistentVolume)
            if !ok {
                return fmt.Errorf("cannot convert object from volume cache to volume %q!?: %+v", claim.Spec.VolumeName, obj)
            }
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested and found: %s", claimToClaimKey(claim), claim.Spec.VolumeName, getVolumeStatusForLogging(volume))
            if volume.Spec.ClaimRef == nil { // The requested PV has an empty claimRef; call bind to bind it to this PVC
                // User asked for a PV that is not claimed
                // OBSERVATION: pvc is "Pending", pv is "Available"
                klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume is unbound, binding", claimToClaimKey(claim))
                if err = checkVolumeSatisfyClaim(volume, claim); err != nil {
                    klog.V(4).Infof("Can't bind the claim to volume %q: %v", volume.Name, err)
                    // send an event
                    msg := fmt.Sprintf("Cannot bind to requested volume %q: %s", volume.Name, err)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, msg)
                    // volume does not satisfy the requirements of the claim
                    if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                        return err
                    }
                } else if err = ctrl.bind(volume, claim); err != nil {
                    // On any error saving the volume or the claim, subsequent
                    // syncClaim will finish the binding.
                    return err
                }
                // OBSERVATION: pvc is "Bound", pv is "Bound"
                return nil
                // Check whether the volume's claimRef already points to this PVC; if so, finish the binding
            } else if pvutil.IsVolumeBoundToClaim(volume, claim) {
                // User asked for a PV that is claimed by this PVC
                // OBSERVATION: pvc is "Pending", pv is "Bound"
                klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound, finishing the binding", claimToClaimKey(claim))

                // Finish the volume binding by adding claim UID.
                if err = ctrl.bind(volume, claim); err != nil {
                    return err
                }
                // OBSERVATION: pvc is "Bound", pv is "Bound"
                return nil
            } else { // The PV requested by the PVC is bound to a different PVC; retry in a later cycle
                // User asked for a PV that is claimed by someone else
                // OBSERVATION: pvc is "Pending", pv is "Bound"
                if !metav1.HasAnnotation(claim.ObjectMeta, pvutil.AnnBoundByController) {
                    klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound to different claim by user, will retry later", claimToClaimKey(claim))
                    claimMsg := fmt.Sprintf("volume %q already bound to a different claim.", volume.Name)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.FailedBinding, claimMsg)
                    // User asked for a specific PV, retry later
                    if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                        return err
                    }
                    return nil
                } else {
                    // This should never happen because someone had to remove
                    // AnnBindCompleted annotation on the claim.
                    klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound to different claim %q by controller, THIS SHOULD NEVER HAPPEN", claimToClaimKey(claim), claimrefToClaimKey(volume.Spec.ClaimRef))
                    claimMsg := fmt.Sprintf("volume %q already bound to a different claim.", volume.Name)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.FailedBinding, claimMsg)

                    return fmt.Errorf("invalid binding of claim %q to volume %q: volume already claimed by %q", claimToClaimKey(claim), claim.Spec.VolumeName, claimrefToClaimKey(volume.Spec.ClaimRef))
                }
            }
        }
    }
}

The second half of syncUnboundClaim mainly handles the case where claim.Spec.VolumeName is not empty: it looks up the requested PV and binds to it. If that PV's claimRef is not empty, it checks whether the PV is already bound to another PVC; if it is not, the binding is performed.

Besides syncUnboundClaim, syncClaim also has a syncBoundClaim method, which mainly handles the various abnormal situations that can occur after a PVC and PV have been bound. Its code is not included here.
