There are several ways to publish an application in Kubernetes, so it is important to choose the right publishing strategy if the app is to keep serving traffic smoothly during an upgrade. The right deployment strategy depends on our business needs. Here are some strategies that might be used:
- Recreate: stop the old version, then deploy the new version
- Rolling update: replace instances of the old version with the new version one by one
- Blue/green: run the new version alongside the old version, then switch traffic over
- Canary: release the new version to a subset of users, then roll it out fully
- A/B test: release a new version to a precisely targeted subset of users (by HTTP header, cookie, weight, etc.). A/B testing is really a technique for making business decisions based on statistics. Kubernetes has no native support for it; additional, more advanced components are required (such as Istio, Linkerd, Traefik, or a custom Nginx/HAProxy setup)
Recreate - best suited to development environments
A Deployment whose strategy is defined as Recreate terminates all running instances and then recreates them with the newer version.
spec:
  replicas: 3
  strategy:
    type: Recreate
With the Recreate strategy, version A is shut down first, and version B is deployed only after version A has been fully shut down. This means the service downtime depends on how long the application takes to shut down and start up.
Here we create two related resource manifest files, app-v1.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
The app-v2.yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
The Deployment definitions in the two resource manifest files above are almost identical; the only difference is the value of the VERSION environment variable. Follow these steps to verify the Recreate strategy:
1. Version 1 is serving traffic
2. Delete version 1
3. Deploy version 2
4. Wait for all replicas to be ready
First deploy the first application:
$ kubectl apply -f app-v1.yaml
service "my-app" created
deployment.apps "my-app" created
Test that version 1 was deployed successfully:
$ kubectl get pods -l app=my-app
NAME                      READY     STATUS    RESTARTS   AGE
my-app-7b4874cd75-m5kct   1/1       Running   0          19m
my-app-7b4874cd75-pc444   1/1       Running   0          19m
my-app-7b4874cd75-tlctl   1/1       Running   0          19m
$ kubectl get svc my-app
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-app    NodePort   10.108.238.76   <none>        80:32532/TCP   5m
$ curl http://127.0.0.1:32532
Host: my-app-7b4874cd75-pc444, Version: v1.0.0
You can see that the version 1 application is working correctly. To watch how the deployment progresses, open a new terminal and run the following command:
$ watch kubectl get po -l app=my-app
Then deploy the version 2 application:
$ kubectl apply -f app-v2.yaml
Now you can watch the Pod list change in the terminal you opened above: the three existing Pods go into the Terminating state, and all three are deleted before any new Pod is created.
Then test the deployment progress of the second version of the application:
$ while sleep 0.1; do curl http://127.0.0.1:32532; done
curl: (7) Failed connect to 127.0.0.1:32532; Connection refused
curl: (7) Failed connect to 127.0.0.1:32532; Connection refused
......
Host: my-app-f885c8d45-sp44p, Version: v2.0.0
Host: my-app-f885c8d45-t8g7g, Version: v2.0.0
Host: my-app-f885c8d45-sp44p, Version: v2.0.0
......
You can see that the service is inaccessible at first, and requests only start succeeding again once the version 2 application has been deployed successfully; the responses now come from version 2.
Finally, you can clean up the resource objects above by executing the following command:
$ kubectl delete all -l app=my-app
Conclusion:
- All instances are replaced at once
- Downtime depends on how long the application takes to shut down and start up
Rolling-update
A rolling update gradually deploys the new version of the application by replacing instances one by one until all instances have been replaced. It typically works as follows: with a pool of version A instances behind a load balancer, deploy one instance of version B; when it is ready to receive traffic (its readiness probe passes), add it to the pool; then remove one instance of version A from the pool and shut it down. Repeat until every instance has been replaced, as shown in the following figure:
The following diagram shows how the application receives traffic during a rolling update:
Here are the key parameters for rolling updates through Deployment in Kubernetes:
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # how many extra Pods can be created at a time
      maxUnavailable: 1  # maximum number of Pods that may be unavailable during the rolling update
Still using the resource manifest file app-v1.yaml above, create a new resource manifest file, app-v2-rolling-update.yaml, which defines rolling updates, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 10
  # Setting maxUnavailable to 0 ensures the service is never degraded during the rolling update;
  # both values can also be set as percentages.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          # A longer initial delay makes the rolling update easier to observe
          initialDelaySeconds: 15
          periodSeconds: 5
In the manifest above, we set the VERSION environment variable to version 2 and configure the Deployment to use a rolling update by setting strategy.type=RollingUpdate.
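Note that maxSurge and maxUnavailable also accept percentage values, calculated against the desired replica count; if you omit them, Kubernetes defaults both to 25%. A minimal sketch of the percentage form:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # up to 25% extra Pods may be created at a time
      maxUnavailable: 25%  # up to 25% of the desired Pods may be unavailable during the update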
With this, we will follow the steps below to verify the rolling update strategy:
1. Version 1 is serving traffic
2. Deploy version 2
3. Wait until all replicas have been replaced by version 2
Similarly, deploy the version 1 application first:
$ kubectl apply -f app-v1.yaml
service "my-app" created
deployment.apps "my-app" created
Test that version 1 was deployed successfully:
$ kubectl get pods -l app=my-app
NAME                      READY     STATUS    RESTARTS   AGE
my-app-7b4874cd75-h8c4d   1/1       Running   0          47s
my-app-7b4874cd75-p4l8f   1/1       Running   0          47s
my-app-7b4874cd75-qnt7p   1/1       Running   0          47s
$ kubectl get svc my-app
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-app    NodePort   10.109.99.184   <none>        80:30486/TCP   1m
$ curl http://127.0.0.1:30486
Host: my-app-7b4874cd75-qnt7p, Version: v1.0.0
Then deploy the rolling-update version 2 application:
$ kubectl apply -f app-v2-rolling-update.yaml
deployment.apps "my-app" configured
At this point you can see many new Pods being created in the watch terminal above, while the old Pods are not deleted right away. Meanwhile, execute the following command to test the application status:
$ while sleep 0.1; do curl http://127.0.0.1:30486; done
Host: my-app-7b4874cd75-vrlj7, Version: v1.0.0
......
Host: my-app-7b4874cd75-vrlj7, Version: v1.0.0
Host: my-app-6b5479d97f-2fk24, Version: v2.0.0
Host: my-app-7b4874cd75-p4l8f, Version: v1.0.0
......
Host: my-app-6b5479d97f-s5ctz, Version: v2.0.0
Host: my-app-7b4874cd75-5ldqx, Version: v1.0.0
......
Host: my-app-6b5479d97f-5z6ww, Version: v2.0.0
We can see that the application never becomes unavailable: at first only version 1 responds, then version 2 responses appear occasionally, and eventually every response comes from version 2. In the watch terminal you can see that all 10 Pods are now running version 2; the replacement happens gradually.
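Rather than polling with curl, you can also wait for the rollout to finish with kubectl rollout status, which blocks until every replica of the Deployment has been updated and become ready:

$ kubectl rollout status deploy my-app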
If we find problems with the new version during the rolling update, we can roll back with a single command:
$ kubectl rollout undo deploy my-app
deployment.apps "my-app"
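The Deployment also keeps a revision history, so before (or instead of) a plain undo you can inspect past revisions and roll back to a specific one. The revision number below is only an illustration; pick one from the history output:

$ kubectl rollout history deploy my-app
$ kubectl rollout undo deploy my-app --to-revision=1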
If you want to keep both versions of the application running for a while, you can pause the rolling update:
$ kubectl rollout pause deploy my-app
deployment.apps "my-app" paused
At this point, if we keep looping requests to the application, we will still occasionally see version 1 responses. If the new version looks fine, we can resume the update:
$ kubectl rollout resume deploy my-app
deployment.apps "my-app" resumed
Finally, you can clean up the resource objects above by executing the following command:
$ kubectl delete all -l app=my-app
Conclusion:
- Versions are replaced slowly, instance by instance
- Rollout/rollback may take some time
- No fine-grained control over traffic
Blue/green - best for avoiding API versioning issues
A blue/green release (also known as a red/black deployment) deploys version 2 alongside version 1 and then switches traffic over to version 2. Unlike a rolling update, version 2 (green) runs next to version 1 (blue) and is tested until it meets the requirements; then the Service object that plays the load-balancer role in Kubernetes is updated to send traffic to the new version by changing the version label in its label selector, as shown in the following figure:
Here is an example diagram of how to apply the blue-green publishing strategy:
In Kubernetes we can implement blue/green releases in two ways: through a single Service object or through an Ingress controller. The operations are similar; both are driven by labels.
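For the Ingress-based variant (not demonstrated in this article), a common pattern is to expose two Services, for example my-app-v1 and my-app-v2, and switch the Ingress backend between them. Below is a minimal, hedged sketch using the networking.k8s.io/v1 Ingress API; the host name and Service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com      # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-v1       # change to my-app-v2 to switch traffic to the new version
            port:
              number: 80

The rest of this section walks through the single-Service approach.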
The key to a blue/green release is how the Service object's label selector matches Pods. Here we redefine the version 1 resource manifest file, app-v1-single-svc.yaml, with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  # Notice that here we match both the app and version labels; when we switch traffic,
  # we update the version label value, e.g. to v2.0.0
  selector:
    app: my-app
    version: v1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Of the resource objects defined above, the most important is the definition of label selector in Service:
selector:
  app: my-app
  version: v1.0.0
Version 2 has the same application definition as before. Create a new file, app-v2-single-svc.yaml, with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Then follow these steps to verify the strategy for blue/green deployment using a single Service object:
1. The version 1 application is serving traffic
2. Deploy the version 2 application
3. Wait until the version 2 application is fully deployed
4. Switch incoming traffic from version 1 to version 2
5. Shut down the version 1 application
First, deploy the version 1 application:
$ kubectl apply -f app-v1-single-svc.yaml
service "my-app" created
deployment.apps "my-app-v1" created
Test that the version 1 application was deployed successfully:
$ kubectl get pods -l app=my-app
NAME                         READY     STATUS    RESTARTS   AGE
my-app-v1-7b4874cd75-7xh6s   1/1       Running   0          41s
my-app-v1-7b4874cd75-dmq8f   1/1       Running   0          41s
my-app-v1-7b4874cd75-t64z7   1/1       Running   0          41s
$ kubectl get svc -l app=my-app
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
my-app    NodePort   10.106.184.144   <none>        80:31539/TCP   50s
$ curl http://127.0.0.1:31539
Host: my-app-v1-7b4874cd75-7xh6s, Version: v1.0.0
Similarly, open a new terminal and execute the following command to observe Pod changes:
watch kubectl get pod -l app=my-app
Then deploy the version 2 application:
$ kubectl apply -f app-v2-single-svc.yaml
deployment.apps "my-app-v2" created
Then you can see three more Pods starting with my-app-v2 in the watch terminal above. After these Pods are successfully deployed, we will access the current application:
$ while sleep 0.1; do curl http://127.0.0.1:31539; done
Host: my-app-v1-7b4874cd75-dmq8f, Version: v1.0.0
Host: my-app-v1-7b4874cd75-dmq8f, Version: v1.0.0
......
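Before switching traffic over, you may also want to check version 2 directly, bypassing the Service selector entirely. One hedged way to do this is kubectl port-forward against the new Deployment; the local port below is arbitrary:

$ kubectl port-forward deploy/my-app-v2 8080:8080
# in another terminal, the response should report Version: v2.0.0
$ curl http://127.0.0.1:8080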
As the curl loop above shows, all requests going through the Service still hit the version 1 application, regardless of the version 2 Pods we just deployed, because the Service object's label selector matches version=v1.0.0. We can route traffic to the Pods labeled version=v2.0.0 by modifying the Service's selector:
$ kubectl patch service my-app -p '{"spec":{"selector":{"version":"v2.0.0"}}}'
service "my-app" patched
Then access the application again and you can see that all responses are now from version 2:
$ while sleep 0.1; do curl http://127.0.0.1:31539; done
Host: my-app-v2-f885c8d45-r5m6z, Version: v2.0.0
Host: my-app-v2-f885c8d45-r5m6z, Version: v2.0.0
......
If you need to roll back to version 1, simply change the Service's selector back:
$ kubectl patch service my-app -p '{"spec":{"selector":{"version":"v1.0.0"}}}'
If the new version already meets our needs, you can delete the version 1 application:
kubectl delete deploy my-app-v1
Finally, similarly, execute the following command to clean up the above resource objects:
kubectl delete all -l app=my-app
Conclusion:
- Real-time deployment and rollback
- Avoids version-mix issues, since the entire application is switched in one change
- Requires twice the resources
- The entire application should be properly tested before releasing it to production
Canary - let a subset of users test the new version
A canary deployment lets a subset of users access the new version of the application. In Kubernetes, this can be done with two Deployments whose Pods share a common label: a small number of new-version replicas is released alongside the old version, and if no errors are detected after a period of time, the new version is scaled up and the old version's Deployment is deleted.
If you need to release a canary at a specific percentage, you have to run enough Pod replicas to produce that traffic split. For example, to send 1% of traffic to version B, you need one Pod running version B and 99 Pods running version A. This rough split may be good enough if you do not need precise control; if you do need precise traffic-control policies, a service mesh (such as Istio) is recommended, as it can control traffic much more accurately.
In the following example we use native Kubernetes features for a "poor man's" canary release; if you want finer-grained traffic control, use Istio instead. Here is a diagram of how requests reach the application during a canary release:
Next, we verify the Canary strategy by following these steps:
1. The version 1 application is serving traffic with 10 replicas
2. Deploy the version 2 application with a single replica (meaning it receives less than 10% of the traffic)
3. Wait long enough to confirm that the version 2 application is stable and produces no errors
4. Scale the version 2 application up to 10 replicas
5. Wait for all instances to be ready
6. Shut down the version 1 application
First, create the version 1 application resource manifest, app-v1-canary.yaml, as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
The core part is again the Service object's label selector, which this time no longer includes a version label. Next, define the version 2 resource manifest file, app-v2-canary.yaml, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
The version 1 and version 2 Pods share the common label app=my-app, so the Service matches Pods of both versions.
First, deploy the version 1 application:
$ kubectl apply -f app-v1-canary.yaml
service "my-app" created
deployment.apps "my-app-v1" created
Then test if the version 1 application is deployed correctly:
$ kubectl get svc -l app=my-app
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
my-app    NodePort   10.105.133.213   <none>        80:30760/TCP   47s
$ curl http://127.0.0.1:30760
Host: my-app-v1-7b4874cd75-tsh2s, Version: v1.0.0
Similarly, open a new terminal to see the changes in Pod:
$ watch kubectl get po
Then deploy the version 2 application:
$ kubectl apply -f app-v2-canary.yaml
deployment.apps "my-app-v2" created
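To confirm the ratio of old to new replicas behind the Service, you can also list the Pods together with their version label; the -L (--label-columns) flag prints the label value as an extra column:

$ kubectl get pods -l app=my-app -L version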
Then you can see one more Pod in the watch terminal, for 11 Pods in total, of which only one runs the new version of the app. You can also loop requests against the application to see whether any version 2 responses appear:
$ while sleep 0.1; do curl http://127.0.0.1:30760; done
Host: my-app-v1-7b4874cd75-bhxbp, Version: v1.0.0
Host: my-app-v1-7b4874cd75-wmcqc, Version: v1.0.0
Host: my-app-v1-7b4874cd75-tsh2s, Version: v1.0.0
Host: my-app-v1-7b4874cd75-ml58j, Version: v1.0.0
Host: my-app-v1-7b4874cd75-spsdv, Version: v1.0.0
Host: my-app-v2-f885c8d45-mc2fx, Version: v2.0.0
......
Normally most responses come from version 1 and only the occasional response comes from version 2, which shows that the canary release is working. Once you have confirmed that version 2 has no problems, you can scale the version 2 application up to 10 replicas:
$ kubectl scale --replicas=10 deploy my-app-v2
deployment.extensions "my-app-v2" scaled
If you access the application now, the traffic is split roughly 1:1 between the old and new versions. After confirming that version 2 is working correctly, you can delete the version 1 application:
$ kubectl delete deploy my-app-v1
deployment.extensions "my-app-v1" deleted
In the end, only the 10 version 2 Pods remain, and the canary release is complete.
Finally, as before, execute the following command to delete the resource objects above:
$ kubectl delete all -l app=my-app
Conclusion:
- A subset of users gets the new version
- Convenient for error-rate and performance monitoring
- Fast rollback
- Slow release
- Precise traffic control is wasteful (99% A / 1% B requires 99 version A Pods and 1 version B Pod)
- If you are not fully confident about releasing a new feature, a canary release is a good choice
A/B testing - best for testing features on a subset of users
A/B testing is really a technique for making business decisions based on statistics rather than a deployment strategy, and it is closely tied to the business. Still, it is related to the topic here and can be implemented on top of a canary release.
Besides weight-based traffic splitting between versions, A/B testing can precisely target a given user group using other parameters such as cookies, the User-Agent header, region, and so on. This technique is widely used to test the effect of particular features and then decide on them based on the observed results.
Apps such as Toutiao run a large number of A/B tests, so users in the same region may see quite different client experiences. For this kind of fine-grained control it is still recommended to use Istio, which can dynamically route requests and control traffic forwarding based on weights or HTTP headers.
The following is an example of Istio rule settings (because Istio was not yet stable at the time, the exact rule format may have changed since):
route:
- tags:
    version: v1.0.0
  weight: 90
- tags:
    version: v2.0.0
  weight: 10
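For header-based targeting, such as sending mobile clients to version 2, a hedged sketch using the newer Istio VirtualService API might look like the following; the host, subset names, and User-Agent pattern are placeholders, and the v1/v2 subsets would need to be defined in a matching DestinationRule:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Mobile.*"   # placeholder match for mobile clients
    route:
    - destination:
        host: my-app
        subset: v2              # subset defined in a DestinationRule
  - route:
    - destination:
        host: my-app
        subset: v1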
We will not go into the details of doing A/B testing with Istio here; that is covered in the istio-book documentation.
Conclusion:
- Several versions run in parallel
- Full control over traffic distribution
- Errors affecting a specific request are hard to troubleshoot; distributed tracing is required
- Kubernetes does not have direct support and requires additional tools
Summary
There are many ways to publish an application. When releasing to a development/test environment, Recreate or a rolling update is usually a good choice. In production, a rolling update or a blue/green release is more appropriate, but the new version must be tested thoroughly beforehand. If you are not confident about the new version, use a canary release to minimize the impact on users. Finally, if your company needs to test new functionality on a specific user group, for example routing mobile users' requests to version A and desktop users' requests to version B, then A/B testing is the way to go; combined with a Kubernetes service gateway, you can route users to the appropriate service based on certain request parameters.