Kubernetes Deployment source code analysis

Overview

Deployment is one of the most commonly used native Kubernetes Workload resources; when we first try Kubernetes, chances are we start by running a Deployment-type workload. In this series, we plan to analyze the Deployment resource and the Deployment controller in depth over several parts, starting with an introduction to its features and then moving on to the source code.

Everyone is more or less familiar with the basic features of Deployment, so this article will not repeat every functional detail. Instead, we start from the slightly less basic features, such as rolling update, to see what Deployment actually supports, laying the groundwork for the source code analysis later on.

Deployment Basics

Let's create a simple Deployment and look at some small details.

Create Deployment

Taking nginx as an example, we can use a Deployment to bring up a 3-replica nginx workload, named nginx-dp:

  • nginx-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

We can create this Deployment resource with kubectl create -f nginx-dp.yaml:

# kubectl create -f nginx-dp.yaml
deployment.apps/nginx-dp created
# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-dp   1/3     3            1           3s
# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-dp   3/3     3            3           10s

After waiting a moment, kubectl get deploy shows that all Pods are up. Pay attention to the meaning of the output fields (NAME and AGE need no explanation):

  • UP-TO-DATE: how many replicas have been updated to the desired (latest) state
  • AVAILABLE: how many replicas are already available to serve traffic
  • READY: the number of replicas that are ready to serve / the desired number of replicas
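These columns are backed by fields in the Deployment status (part of the apps/v1 API). As a quick sketch, assuming the nginx-dp Deployment created above, the raw values can be read with jsonpath:

# the READY / UP-TO-DATE / AVAILABLE columns map to these status fields
kubectl get deploy nginx-dp \
  -o jsonpath='{.status.readyReplicas} {.status.updatedReplicas} {.status.availableReplicas}'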

ReplicaSet

  • Query ReplicaSet
# kubectl get rs --selector=app=nginx
NAME                  DESIRED   CURRENT   READY   AGE
nginx-dp-66b6c48dd5   3         3         3       9m54s

After creating the Deployment, we can see an additional ReplicaSet resource in the cluster; in other words, the Deployment actually manages a ReplicaSet rather than managing Pods directly. Let's look at the definition of the ReplicaSet to verify this idea:

# kubectl get rs nginx-dp-66b6c48dd5 -o yaml
apiVersion: apps/v1
kind: ReplicaSet
// ......
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-dp
    uid: 97736b65-0171-4916-bb18-feccc343ac14
  resourceVersion: "1099157"
  uid: 83ac5660-28eb-4d40-beb1-cb5ceb6928b6
// ......

Here you can see from ownerReferences that this ReplicaSet is owned by the Deployment nginx-dp. In the same way, you can check that the corresponding Pods are owned by the ReplicaSet.
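As a small sketch of that check (the Pod name below is hypothetical; substitute one of the names returned by the first command), the owner reported should be the ReplicaSet, not the Deployment itself:

# list the Pods behind the Deployment
kubectl get pod --selector=app=nginx
# inspect the owner of one of them; expect something like ReplicaSet/nginx-dp-66b6c48dd5
kubectl get pod nginx-dp-66b6c48dd5-xxxxx \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'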

From this we can already guess at the implementation principle of the Deployment Controller: it manages the lifecycle of ReplicaSets and, relying on the capabilities provided by the ReplicaSet Controller, indirectly manages the lifecycle of Pods; in addition, by creating multiple ReplicaSet resources and adjusting their replica counts, it can implement operations such as rolling update and rollback. In this sense, the logic of the Deployment controller is relatively "high level".

Rolling update

  • Update the image with the kubectl set command:
# kubectl set image deployment/nginx-dp nginx=nginx:1.16.1
deployment.apps/nginx-dp image updated
  • View the Events:
# kubectl describe deploy nginx-dp
// ......

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  26m   deployment-controller  Scaled up replica set nginx-dp-66b6c48dd5 to 3
  Normal  ScalingReplicaSet  88s   deployment-controller  Scaled up replica set nginx-dp-559d658b74 to 1
  Normal  ScalingReplicaSet  87s   deployment-controller  Scaled down replica set nginx-dp-66b6c48dd5 to 2
  Normal  ScalingReplicaSet  87s   deployment-controller  Scaled up replica set nginx-dp-559d658b74 to 2
  Normal  ScalingReplicaSet  86s   deployment-controller  Scaled down replica set nginx-dp-66b6c48dd5 to 1
  Normal  ScalingReplicaSet  86s   deployment-controller  Scaled up replica set nginx-dp-559d658b74 to 3
  Normal  ScalingReplicaSet  84s   deployment-controller  Scaled down replica set nginx-dp-66b6c48dd5 to 0

From the Events we can see that the deployment-controller completed this rolling update by adjusting the replica counts of the two ReplicaSets nginx-dp-66b6c48dd5 and nginx-dp-559d658b74. Let's first look at these two ReplicaSets:

# kubectl get rs --selector=app=nginx
NAME                  DESIRED   CURRENT   READY   AGE
nginx-dp-559d658b74   3         3         3       134m
nginx-dp-66b6c48dd5   0         0         0       159m

You can see that a new ReplicaSet nginx-dp-559d658b74 has appeared with 3 replicas, while the old nginx-dp-66b6c48dd5 has been scaled down to 0 replicas. The process was roughly as follows:

  1. Scaled up replica set nginx-dp-559d658b74 to 1 -> the new RS gains one replica, to 1; 4 replicas in total
  2. Scaled down replica set nginx-dp-66b6c48dd5 to 2 -> the old RS loses one replica, to 2; 3 replicas in total
  3. Scaled up replica set nginx-dp-559d658b74 to 2 -> the new RS gains one replica, to 2; 4 replicas in total
  4. Scaled down replica set nginx-dp-66b6c48dd5 to 1 -> the old RS loses one replica, to 1; 3 replicas in total
  5. Scaled up replica set nginx-dp-559d658b74 to 3 -> the new RS gains one replica, to 3; 4 replicas in total
  6. Scaled down replica set nginx-dp-66b6c48dd5 to 0 -> the old RS loses one replica, to 0; 3 replicas in total
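If you would rather watch this hand-off live than reconstruct it from the Events afterwards, kubectl provides rollout status; a minimal sketch (run right after kubectl set image, with the watch command in a second terminal):

# block until the rollout completes (or report why it is stuck)
kubectl rollout status deployment/nginx-dp
# in another terminal: watch the two ReplicaSets trade replicas in real time
kubectl get rs --selector=app=nginx --watch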

Rollback on failure

Revision history

Let's first look at how to query the update history:

# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

One detail to notice here: CHANGE-CAUSE is empty. This field actually comes from the kubernetes.io/change-cause annotation. Let's add the annotation and try:

kubectl annotate deployment/nginx-dp kubernetes.io/change-cause="image updated to 1.16.1"

Check again:

# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
1         <none>
2         image updated to 1.16.1

What about the first revision? We can guess that the change-cause information of each revision is stored as this same annotation on the corresponding ReplicaSet, so let's try to fill in the annotation for the first revision that way:

kubectl annotate rs/nginx-dp-66b6c48dd5 kubernetes.io/change-cause="nginx deployment created"

Check again:

# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
1         nginx deployment created
2         image updated to 1.16.1

Now the history looks much better. Filling in change-cause makes it much easier to know where you are when rolling back to a particular revision.
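To double-check the guess that each revision's change-cause lives on its ReplicaSet, we can read the annotation back directly; a small sketch using the ReplicaSet name from above (the dots in the annotation key must be escaped in jsonpath):

# the change-cause of revision 1 is stored as an annotation on its ReplicaSet
kubectl get rs nginx-dp-66b6c48dd5 \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'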

Rollback

  • Set a nonexistent image version to simulate the update failure scenario:
# kubectl set image deployment/nginx-dp nginx=nginx:1.161
deployment.apps/nginx-dp image updated
# kubectl get rs --selector=app=nginx
NAME                  DESIRED   CURRENT   READY   AGE
nginx-dp-559d658b74   3         3         3       168m
nginx-dp-66b6c48dd5   0         0         0       3h13m
nginx-dp-66bc5d6c8    1         1         0       6s
# kubectl get pod --selector=app=nginx
NAME                        READY   STATUS             RESTARTS   AGE
nginx-dp-559d658b74-l4bq7   1/1     Running            0          170m
nginx-dp-559d658b74-qhh8m   1/1     Running            0          170m
nginx-dp-559d658b74-vbtl5   1/1     Running            0          170m
nginx-dp-66bc5d6c8-tl848    0/1     ImagePullBackOff   0          2m2s
  • Set an annotation:
# kubectl annotate deployment/nginx-dp kubernetes.io/change-cause="image updated to 1.161"
deployment.apps/nginx-dp annotated
# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
1         nginx deployment created
2         image updated to 1.16.1
3         image updated to 1.161
  • Roll back to the previous revision (revision 2):
# kubectl rollout undo deployment/nginx-dp
deployment.apps/nginx-dp rolled back
# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
1         nginx deployment created
3         image updated to 1.161
4         image updated to 1.16.1

At this point, revision 2 has become the newest revision, renumbered as revision 4.

  • To view the detailed configuration of a specific revision:
# kubectl rollout history deployments/nginx-dp --revision=1
deployment.apps/nginx-dp with revision #1
Pod Template:
  Labels:    app=nginx
    pod-template-hash=66b6c48dd5
  Annotations:    kubernetes.io/change-cause: nginx deployment created
  Containers:
   nginx:
    Image:    nginx:1.14.2
    Port:    80/TCP
    Host Port:    0/TCP
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>
  • Roll back to a specified revision:
# kubectl rollout undo deployment/nginx-dp --to-revision=1
deployment.apps/nginx-dp rolled back
# kubectl rollout history deployments/nginx-dp
deployment.apps/nginx-dp
REVISION  CHANGE-CAUSE
3         image updated to 1.161
4         image updated to 1.16.1
5         nginx deployment created
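Under the hood, rollout history maps each revision to a ReplicaSet through the deployment.kubernetes.io/revision annotation, and a rollback scales the matching old ReplicaSet back up rather than creating a new one. A sketch for seeing which ReplicaSet backs which revision (the output format here is illustrative):

# each ReplicaSet records the Deployment revision it belongs to
kubectl get rs --selector=app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name} -> revision {.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}{end}'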

Other features

Finally, let's go through all of the properties in a Deployment's spec:

  • minReadySeconds: defaults to 0; how long a newly created Pod must stay ready before it is counted as available. For example, with a value of 1, a ready Pod is not treated as available until 1s later;
  • paused: whether the Deployment is paused;
  • progressDeadlineSeconds: defaults to 600; how long a Deployment is given to make progress before it is marked as failed. For example, if an upgrade has not succeeded within 10 minutes, the rollout is reported as failed;
  • replicas: the desired number of replicas;
  • revisionHistoryLimit: defaults to 10; the number of old ReplicaSets (historical revisions) to retain;
  • selector: the label selector used to match Pods;
  • strategy: the strategy for replacing old Pods with new ones when the Deployment is updated;
  • template: the Pod template;

The strategy field has two attributes: type and rollingUpdate. The possible values of type are "Recreate" and "RollingUpdate", with "RollingUpdate" as the default. strategy.rollingUpdate in turn has two properties (a combined example follows the list below):

  • maxSurge: the maximum number of replicas that may exist above the desired replica count during a rolling update; it can be an absolute number or a percentage. For example, 1 means at most one extra replica is created at a time, and another new replica is created only after an old one has been deleted; percentages are rounded up;
  • maxUnavailable: the maximum number of replicas that may be unavailable during a rolling update; also an absolute number or a percentage. For example, with 3 desired replicas, a value of 1 means old Pods can only be deleted down to 2 available replicas, and further deletions must wait until new replicas become ready; percentages are rounded down;
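Putting these update-related knobs together, the relevant part of a Deployment spec might look like the sketch below (values are purely illustrative, not recommendations):

spec:
  replicas: 3
  minReadySeconds: 5          # a Pod must stay ready for 5s before it counts as available
  revisionHistoryLimit: 5     # keep at most 5 old ReplicaSets for rollback
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate       # the default; the alternative is Recreate
    rollingUpdate:
      maxSurge: 1             # at most 1 Pod above the desired count during an update
      maxUnavailable: 1       # at most 1 Pod may be unavailable during an update

With replicas: 3, this configuration lets the controller run up to 4 Pods and requires at least 2 to stay available at every step of a rolling update.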

Summary

The purpose of this article is to get to know the full set of Deployment features in preparation for the source code analysis later in the series. Along the way we did not repeat the basics of Deployment, but focused on the main capabilities such as "rolling update" and "rollback". In addition, we briefly went through all of the configuration items in the Deployment spec, so that we have a clear picture of where the capability boundary of Deployment lies and can be more targeted when reading the source code later.
