Do you know what a Canary release is? And do you know how to implement one in a Service Mesh-based microservice architecture?


What is a Canary release

Since we want to talk about the specific implementation, let's first explain what a "Canary release" is. A Canary release is also called a "gray release" (grayscale release). Specifically, when releasing a new version online, a small amount of production traffic is first directed to the new version of the service to verify its accuracy and reliability. Only after the new version has been fully verified by the online traffic is the remaining traffic gradually shifted to it, so that the production service version is updated in a stable way.

Why is it called a Canary release? Because canaries are sensitive to toxic gases in mines, miners used to send a canary in before entering a mine to check whether toxic gas was present. This is the origin of the name "Canary release".

In different technology stacks, Canary releases are implemented in different ways: some rely on Nginx, others on A/B testing. With the popularity of cloud-native infrastructure represented by Kubernetes, the Canary release, as a basic service-release capability, shows a new trend in how it is implemented: it is gradually being integrated with cloud-native infrastructure and becoming part of the infrastructure services themselves.

Canary (grayscale) release in Kubernetes

Next, let's take a look at how version updates are implemented in Kubernetes. The following content assumes that you already have a working Kubernetes environment. If you don't, please refer to the articles recommended at the end of this article and deploy one yourself.

1. Rolling update

Before introducing the Canary (grayscale) release in Kubernetes, let's first look at the most important application deployment mechanism in Kubernetes: the "rolling upgrade".

The so-called "rolling upgrade" means that after the Pod template of a Deployment resource in Kubernetes is updated (for example, the image version number is changed), the Deployment upgrades the existing containers following a strategy called "RollingUpdate", so that the externally exposed service is updated without interruption. The schematic diagram of a "rolling upgrade" in Kubernetes is as follows:

As shown in the figure above, the rolling upgrade process is as follows:

1) When the upgrade starts, a Pod of the new version is started in the cluster and a Pod of the old version is terminated.

2) If the new-version Pod fails to start at this point, the "rolling upgrade" stops and allows developers and operations staff to intervene. Because the application still has two old-version Pods online during this process, the service is not significantly affected.

3) If the new-version Pod starts successfully and the service can be accessed normally, the rolling upgrade continues, and the remaining old-version Pods are upgraded one after another until the number of replicas defined in the Deployment is reached.

In Kubernetes, a Deployment can also control the rolling-upgrade behavior of its Pods through a corresponding "rolling upgrade" strategy, to further guarantee service continuity. For example: "in any time window, only a specified proportion of Pods may be offline; in any time window, only a specified proportion of new Pods may be created". This can be set through the corresponding control parameters, as follows:

...
spec:
  selector:
    matchLabels:
      app: micro-api
  replicas: 3
  #Set rolling upgrade policy
  #Kubernetes waits for the set time before starting the upgrade, for example, 5 seconds
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      #During the upgrade process, the maximum number of Pods can be more than the original setting
      maxSurge: 1
      #How many old Pods can be deleted by the Deployment controller during the upgrade process; it is mainly used to provide buffer time
      maxUnavailable: 1
...

In the RollingUpdate strategy configuration above:

  • maxSurge: specifies how many new Pods the Deployment controller may create in one "rolling" step, in addition to the desired number of Pod replicas.
  • maxUnavailable: specifies how many old Pods the Deployment controller may delete in one "rolling" step.

With such a precise "rolling upgrade" strategy, the release of a new service version in Kubernetes becomes smoother. In addition, these two parameters can also be expressed as percentages, e.g. "maxUnavailable: 50%", which means the Deployment controller may delete at most "50% of the desired number of Pod replicas" at a time.
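For example, the same strategy expressed in percentages might look like the following sketch (the exact values here are only an illustration, not part of the sample project):

strategy:
  type: RollingUpdate
  rollingUpdate:
    #At most 25% more Pods than the desired replica count may exist during the rollout
    maxSurge: 25%
    #At most 50% of the desired replica count may be unavailable at any moment
    maxUnavailable: 50%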

Next, we will demonstrate the detailed process of service rolling upgrade in Kubernetes.

Description of the sample code used:

The project is a set of Java services written with Spring Boot, which is closer to a real project-development scenario. The structure of the project is as follows:

The GitHub address where the project is located is:

https://github.com/manongwudi/istio-micro-service-demo

Rolling upgrade Demo:

Here, we first demonstrate the "rolling upgrade" process in Kubernetes with the help of the "micro-api" service in the example project. The steps are as follows:

(1) First, prepare the k8s release file of the "micro-api" service (e.g. micro-api.yaml). The content is as follows:

apiVersion: v1
kind: Service
metadata:
  name: micro-api
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 19090
      targetPort: 9090
  selector:
    app: micro-api

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-api
spec:
  selector:
    matchLabels:
      app: micro-api
  replicas: 3
  #Set rolling upgrade policy
  #Kubernetes waits for the set time before starting the upgrade, for example, 5 seconds
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      #During the upgrade process, the maximum number of Pods can be more than the original setting
      maxSurge: 1
      #How many old Pods can the Deployment controller delete during the upgrade process
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: micro-api
    spec:
      #Set the Secret with the Alibaba Cloud private image registry login information (corresponding to the settings in 2.1.2)
      imagePullSecrets:
        - name: regcred
      containers:
        - name: micro-api
          image: registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.0-SNAPSHOT
          imagePullPolicy: Always
          tty: true
          ports:
            - name: http
              protocol: TCP
              containerPort: 19090

The above deployment file sets the number of Pod replicas of the "micro-api" service to 3 and configures the corresponding rolling-upgrade strategy.

(2) Next, execute the k8s deployment command as follows:

$ kubectl apply -f micro-api.yaml

After the Deployment is created successfully, view the status information of the Deployment. The command effect is as follows:

$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
micro-api   3/3     3            3           190d

From the returned result of the above command, you can see three status fields. Their meanings are as follows:

  • READY: shows the number of Pods currently in Running status versus the number of Pod replicas the user expects.
  • UP-TO-DATE: the number of Pods currently at the latest version, where "latest version" means the Pod's Spec is completely consistent with the Pod template defined in the Deployment.
  • AVAILABLE: the number of currently available Pods, i.e. Pods that are Running, at the latest version, and Ready.

(3) Simulate service version upgrade and trigger rolling upgrade.

Next, rebuild a new version of the "micro-api" service and push it to the private image registry. Then modify the image used by the "micro-api" Deployment via a command to trigger the rolling upgrade.

The commands for modifying the image used by Deployment are as follows:

$ kubectl set image deployment/micro-api micro-api=registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.1-SNAPSHOT
deployment.apps/micro-api image updated

The "kubectl set image" command is used here simply because it is convenient; you can also modify the image version directly in the k8s deployment file.
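For reference, a sketch of the file-based alternative (assuming the same micro-api.yaml used above) would be to change the image line and re-apply the file:

#In micro-api.yaml, point the container at the new image version, e.g.:
#  image: registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.1-SNAPSHOT
#then re-apply the file, which triggers the same rolling upgrade
$ kubectl apply -f micro-api.yaml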

After modifying the image version of Deployment, Kubernetes will immediately trigger the "rolling upgrade" process. You can view the status changes of Deployment resources through the "kubectl rollout status" command. The details are as follows:

$ kubectl rollout status deployment/micro-api
Waiting for deployment "micro-api" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "micro-api" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "micro-api" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "micro-api" rollout to finish: 2 of 3 updated replicas are available...
Waiting for deployment "micro-api" rollout to finish: 2 of 3 updated replicas are available...
deployment "micro-api" successfully rolled out

At this time, you can also follow the "rolling upgrade" process through the Deployment's Events. The details are as follows:

$ kubectl describe deployment micro-api
...
OldReplicaSets:  <none>
NewReplicaSet:   micro-api-d745d8649 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set micro-api-677dd4d5b6 to 1
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled down replica set micro-api-57c7cb5b74 to 2
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set micro-api-677dd4d5b6 to 2
  Normal  ScalingReplicaSet  5m1s   deployment-controller  Scaled down replica set micro-api-677dd4d5b6 to 0
  Normal  ScalingReplicaSet  5m     deployment-controller  Scaled up replica set micro-api-d745d8649 to 2
  Normal  ScalingReplicaSet  56s    deployment-controller  Scaled down replica set micro-api-57c7cb5b74 to 0
  Normal  ScalingReplicaSet  56s    deployment-controller  Scaled up replica set micro-api-d745d8649 to 3

It can be seen that after you modify the Pod definition in the Deployment, the Deployment controller uses the modified Pod template to create a new ReplicaSet, whose initial number of Pod replicas is 0.

Then, at the position of Age=12m, the number of Pod replicas controlled by the new ReplicaSet starts to change from 0 to 1.

Also at Age=12m, the number of Pod replicas controlled by the old ReplicaSet is reduced by 1, i.e. it is "scaled in" to two replicas.

In this way, the number of Pods managed by the new ReplicaSet changes from 0 to 1, then to 2, and finally to 3, while the number of Pods managed by the old ReplicaSet changes from 3 to 2, and finally to 0.

With this, the version upgrade of one group of Pods is complete. This process, in which multiple Pod versions running in a Kubernetes cluster are upgraded alternately one by one, is what we call a "rolling upgrade".
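If you want to observe the old and new ReplicaSets side by side during or after the rollout, a quick way (output will of course differ in your own cluster) is to list them by label:

#List the ReplicaSets belonging to the micro-api Deployment;
#after the rollout, the old ReplicaSet should show 0 desired Pods and the new one 3
$ kubectl get rs -l app=micro-api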

2. Canary (grayscale) release

The previous subsection demonstrated Kubernetes' "rolling upgrade" in detail. Although rolling upgrades make version updates convenient and smooth, the process has no grayscale capability. There is a buffering, alternating phase during the rolling upgrade, but it is automatic and fast; when the rolling upgrade finishes, the new version has effectively been fully released.

For scenarios that require a Canary (gray) release, the "rolling upgrade" approach is obviously not enough. So, how should version updates be combined with Canary (gray) releases in Kubernetes?

The specific steps are as follows:

(1) Write the deployment file to realize the gray release of the new version.

In order to make the Canary (grayscale) release process in Kubernetes observable, we redefine the specific k8s release file (e.g. micro-api-canary.yaml) as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-api
spec:
  selector:
    matchLabels:
      app: micro-api
  replicas: 3
  #Set rolling upgrade policy
  #Kubernetes waits for the set time before starting the upgrade, for example, 5 seconds
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      #During the upgrade process, the maximum number of Pods can be more than the original setting
      maxSurge: 1
      #How many old Pods can be deleted by the Deployment controller during the upgrade process; it is mainly used to provide buffer time
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: micro-api
        #Add a new label (to demo the k8s grayscale release)
        track: canary
    spec:
      #Set the Secret with the Alibaba Cloud private image registry login information (corresponding to the settings in 2.1.2)
      imagePullSecrets:
        - name: regcred
      containers:
        - name: micro-api
          image: registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.3-SNAPSHOT
          imagePullPolicy: Always
          tty: true
          ports:
            - name: http
              protocol: TCP
              containerPort: 19090

The content of the above release file is basically the same as the file used when demonstrating the rolling upgrade earlier; the only difference is that, to make the grayscale release process easier to observe, the newly released Pod version is marked with the label "track: canary".

The image of the new version is set to "micro-api:1.3-SNAPSHOT". Through "spec.selector.matchLabels.app: micro-api", the new Pods still match the Service resource definition (the Service defined in the micro-api.yaml file) that fronts the historical-version Pods.
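For reference, the relevant part of that Service definition (from micro-api.yaml above) selects Pods only by the app label, which is why both the old Pods and the canary Pods become its endpoints:

apiVersion: v1
kind: Service
metadata:
  name: micro-api
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 19090
      targetPort: 9090
  #Selects only on the app label, so Pods with and without track=canary both receive traffic
  selector:
    app: micro-api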

(2) Execute the "rolling upgrade" release command to achieve the "gray release" effect.

$ kubectl apply -f micro-api-canary.yaml && kubectl rollout pause deployment/micro-api

The Canary (grayscale) release of the Deployment is achieved through the "kubectl rollout pause" command above. The running state after issuing the command is as follows:

$ kubectl get pods --show-labels -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP          NODE         NOMINATED NODE   READINESS GATES   LABELS
micro-api-57c7cb5b74-mq7m9   1/1     Running   0          6m20s   10.32.0.3   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=57c7cb5b74
micro-api-57c7cb5b74-ptptj   1/1     Running   0          6m20s   10.32.0.4   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=57c7cb5b74
micro-api-7dbb6c5d66-4rbdc   1/1     Running   0          5m33s   10.32.0.6   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=7dbb6c5d66,track=canary
micro-api-7dbb6c5d66-cfk9l   1/1     Running   0          5m33s   10.32.0.5   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=7dbb6c5d66,track=canary

View the rolling upgrade of Deployment. The command is as follows:

$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
micro-api   4/3     2            4           194d

You can see that the READY count of "micro-api" is 4, consisting of two old-version Pods and two new-version Pods.

(3) Next, conduct a traffic test.

Query the IP of the Service resource shared by the two groups of Pod versions. The command is as follows:

# kubectl get svc micro-api
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
micro-api   ClusterIP   10.110.169.161   <none>        19090/TCP   194d

Next, simulate batch access to the service interface. The command is as follows:

$ for i in ; do curl 10.110.169.161:19090/test/test; done
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}

You can see that the traffic flows randomly to both the old version and the new version (whose responses are marked with V3).

(4) Upgrade the service version to the new version.

If the new version of the service is verified to be OK by the online traffic test, you can upgrade the service to the new version in full through the "rollout resume" command. The command is as follows:

$ kubectl rollout resume deployment micro-api
deployment.apps/micro-api resumed

The effects after upgrading are as follows:

$ kubectl get pods --show-labels -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES   LABELS
micro-api-7dbb6c5d66-4rbdc   1/1     Running   0          18m   10.32.0.6   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=7dbb6c5d66,track=canary
micro-api-7dbb6c5d66-bpjtg   1/1     Running   0          84s   10.32.0.3   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=7dbb6c5d66,track=canary
micro-api-7dbb6c5d66-cfk9l   1/1     Running   0          18m   10.32.0.5   kubernetes   <none>           <none>            app=micro-api,pod-template-hash=7dbb6c5d66,track=canary

You can see that the target service has completed the full update through the "rolling upgrade". If there is a problem, roll back with the "kubectl rollout undo" command!
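A minimal rollback sketch (standard kubectl commands, shown here only for completeness) looks like this:

#View the revision history of the Deployment
$ kubectl rollout history deployment/micro-api
#Roll back to the previous revision
$ kubectl rollout undo deployment/micro-api
#Or roll back to a specific revision
$ kubectl rollout undo deployment/micro-api --to-revision=1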

It can be seen from the above process that the Canary (gray) release in Kubernetes is mainly realized by manipulating (e.g. pausing) the "rolling upgrade": release a certain number of new-version Pods, and rely on the load-balancing capability of the Service resource to let traffic alternate randomly between the new and old Pods.

Although this method can satisfy some simple scenarios, there is no way to achieve more precise grayscale traffic control. For that, we need the solution provided by a Service Mesh. Let's take a look at how to achieve a Canary (gray) release with precise traffic control in Istio.

Canary (grayscale) release in Istio

Istio implements Canary (gray) releases differently from Kubernetes. Through the powerful routing-rule management capability of Envoy (the sidecar proxy), Istio can flexibly control the traffic proportion sent to each version, realizing a Canary (gray) release with precise traffic control.

Istio implements the traffic routing of a Canary (grayscale) release through Envoy (the sidecar), as shown below (continuing with the "micro-api" service as an example):

It can be roughly seen from the above figure that Istio has strong traffic-management capabilities, so achieving precise traffic control for a Canary (gray) release comes naturally.

Specifically, in Istio, traffic routing inside the service mesh is implemented through dedicated resources such as the VirtualService. A VirtualService makes it easy to define traffic-routing rules, apply these rules when a client tries to connect to a service, and finally deliver the traffic to the target service.

Next, we will demonstrate how to realize Canary (grayscale) publishing through VirtualService in Istio. The steps are as follows:

(1) First release a v1 version of the service.

To achieve finer version control in Istio, each Pod resource needs to carry an explicit "version label" when it is released. Prepare the k8s deployment file of the v1 version of the "micro-api" service (micro-api-canary-istio-v1.yaml):

apiVersion: v1
kind: Service
metadata:
  name: micro-api
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 19090
      targetPort: 9090
  selector:
    app: micro-api

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-api-v1
spec:
  selector:
    matchLabels:
      app: micro-api
      #Here is the key: the version label must be set in order to realize the grayscale release
      version: v1
  replicas: 3
  #Set rolling upgrade policy
  #Kubernetes waits for the set time before starting the upgrade, for example, 5 seconds
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      #During the upgrade process, the maximum number of Pods can be more than the original setting
      maxSurge: 1
      #How many old Pods can be deleted by the Deployment controller during the upgrade process; it is mainly used to provide buffer time
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: micro-api
        #Set the version label to facilitate the grayscale release
        version: v1
    spec:
      #Set the Secret with the Alibaba Cloud private image registry login information
      imagePullSecrets:
        - name: regcred
      containers:
        - name: micro-api
          image: registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.1-SNAPSHOT
          imagePullPolicy: Always
          tty: true
          ports:
            - name: http
              protocol: TCP
              containerPort: 19090

The "spec.selector.matchLabels.version: v1" label is used to mark the service version. This label is the main basis on which the subsequent Istio traffic-management rules identify the service version.

When you are ready to publish the file, execute the publish command:

$ kubectl apply -f micro-api-canary-istio-v1.yaml

At this point, the lower version of the service is running successfully! Next, we simulate a Canary (grayscale) release.
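Note that for Istio (Envoy) to take over the traffic of these Pods, the sidecar proxy must be injected into them. Assuming automatic sidecar injection is used and the services are deployed in the default namespace, a typical setup step (done before the Pods are created) is:

#Enable automatic sidecar injection for the default namespace
$ kubectl label namespace default istio-injection=enabled
#After deployment, each micro-api Pod should show 2/2 containers (application + Envoy sidecar)
$ kubectl get pods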

(2) Publish a v2 version of the service (the target version of the upgrade).

Like the v1 service, the published v2 service also needs to carry a version label. The content of its release file (micro-api-canary-istio-v2.yaml) is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-api-v2
spec:
  selector:
    matchLabels:
      app: micro-api
      #Set the version label to facilitate the grayscale release
      version: v2
  replicas: 3
  #Set rolling upgrade policy
  #Kubernetes waits for the set time before starting the upgrade, for example, 5 seconds
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      #During the upgrade process, the maximum number of Pods can be more than the original setting
      maxSurge: 1
      #How many old Pods can be deleted by the Deployment controller during the upgrade process; it is mainly used to provide buffer time
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: micro-api
        #Set the version label to facilitate the grayscale release
        version: v2
    spec:
      #Set the Secret with the Alibaba Cloud private image registry login information
      imagePullSecrets:
        - name: regcred
      containers:
        - name: micro-api
          image: registry.cn-hangzhou.aliyuncs.com/wudimanong/micro-api:1.3-SNAPSHOT
          imagePullPolicy: Always
          tty: true
          ports:
            - name: http
              protocol: TCP
              containerPort: 19090

Execute the publish command:

$ kubectl apply -f micro-api-canary-istio-v2.yaml
deployment.apps/micro-api-v2 created

At this time, there are two groups of Pod resources in the system, as follows:

# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
micro-api-v1-565d749dd4-7c66z   1/1     Running   2          13h
micro-api-v1-565d749dd4-7dqfb   1/1     Running   2          13h
micro-api-v1-565d749dd4-l62wc   1/1     Running   2          13h
micro-api-v2-6f98c598c9-5stlw   1/1     Running   0          82s
micro-api-v2-6f98c598c9-f2ntq   1/1     Running   0          82s
micro-api-v2-6f98c598c9-l8g4j   1/1     Running   0          82s

Next, we will demonstrate how to use Istio's powerful traffic-management capabilities to precisely control the traffic between these two groups of Pod resources!

(3) Create Istio gateway resource.

In Istio, to control traffic precisely, the VirtualService needs to be bound to a concrete ingress gateway resource. Therefore, before creating the VirtualService that implements traffic routing and control, we create an Istio Gateway. The content of the deployment file (micro-gateway.yaml) is as follows:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: micro-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

After the above deployment file is executed, an Istio gateway named "micro-gateway" is created, and all hosts (specified by hosts: "*") are allowed to pass through the gateway.
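Executing the file is the usual kubectl apply (assuming the file name micro-gateway.yaml used above):

$ kubectl apply -f micro-gateway.yaml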

(4) Create Istio virtual service resource VirtualService.

As mentioned earlier, in Istio the VirtualService is mainly used to implement traffic routing and control inside the service mesh. Next, let's look at how a VirtualService resource is created. Prepare the resource file (e.g. virtual-service-all.yaml) as follows:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: micro-api-route
spec:
  #Defines the target host to which traffic is sent (here, the micro-api service deployed in k8s)
  hosts:
    - micro-api.default.svc.cluster.local
  #Bind the VirtualService to the Istio gateway and expose the routing target through the gateway
  gateways:
    - micro-gateway
  http:
    - route:
        #Set the traffic proportion of the old version (v1) to 70%
        - destination:
            host: micro-api.default.svc.cluster.local
            subset: v1
          #Set the traffic proportion through the weight value
          weight: 70
        #Set the traffic proportion of the new version (v2) to 30%
        - destination:
            host: micro-api.default.svc.cluster.local
            subset: v2
          weight: 30

As shown above, the VirtualService resource provides precise traffic control for HTTP: it can route a specified proportion of traffic to the version identified by a specific "subset". To make this work, the VirtualService also needs to be bound to the Istio gateway so that the routing target is exposed through the gateway.

(5) Create the Istio routing rule resource (DestinationRule).

In Istio, the VirtualService is mainly used to control traffic behavior, while the routing rules referenced by that traffic behavior are defined through the "DestinationRule" resource. Create a routing rule file (destination-rule-all.yaml) as follows:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: micro-api-destination
spec:
  #Associated with the name of the Service resource that fronts the Deployment resources
  host: micro-api
  #Traffic policy settings: load balancing policy, connection pool size, outlier detection, etc., acting on traffic after routing has occurred
  trafficPolicy:
    #Connection pool (rate limiting) settings
    connectionPool:
      tcp:
        maxConnections: 10
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    #Load balancing algorithm for the destination
    loadBalancer:
      simple: ROUND_ROBIN
  #Subsets refer to different service versions; a subset identifies a version of the application so traffic can be switched between versions
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

As shown above, the subsets attribute defines the version-label matching information that the VirtualService resource uses for routing. With this, the grayscale traffic-control rules for the two service versions are in place. Next, test the actual Canary (gray) release effect.
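If not already done, both resource files can be applied with kubectl in the usual way (assuming the file names used above):

$ kubectl apply -f virtual-service-all.yaml
$ kubectl apply -f destination-rule-all.yaml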

(6) Test the traffic-control effect of the Canary (gray) release implemented by Istio.

Before the formal test, you can view the current deployment resources through the command:

#View deployed Deployment resources
kubectl get deploy | grep micro-api
micro-api-v1   3/3     3            3           21h
micro-api-v2   3/3     3            3           8h

#View the Service IP of the k8s Service corresponding to the two groups of version Pods
kubectl get svc micro-api
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
micro-api   ClusterIP   10.110.169.161   <none>        19090/TCP   205d

#View the VirtualService resource definition
kubectl get vs
NAME              GATEWAYS          HOSTS                                   AGE
micro-api-route   [micro-gateway]   [micro-api.default.svc.cluster.local]   7h34m

#View the defined routing rule resources
kubectl get dr
NAME                    HOST        AGE
micro-api-destination   micro-api   7h27m

Through the above resource information, we can already find the IP of the k8s Service resource corresponding to the Deployments. However, if we test through the k8s Service resource directly, we will find that the traffic control is not precise and cannot reach "70% of traffic to v1 and 30% to v2" (because the Service distributes that traffic randomly).

Therefore, to use Istio's precise traffic-control capability, you also need to go through Istio's ingress gateway. The command to view the IP of Istio's ingressgateway resource is as follows:

#View the IP address of the ingress gateway
kubectl get svc -n istio-system | grep ingress
istio-ingressgateway   LoadBalancer   10.98.178.61   <pending>   15021:31310/TCP,80:32113/TCP,443:31647/TCP,31400:30745/TCP,15443:30884/TCP   7h54m

Next, access the "micro-api" service through the IP of the ingress gateway. The commands and effects are as follows:

# for i in ; do curl -H "Host:micro-api.default.svc.cluster.local" 10.98.178.61:80/test/test; done
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}

As shown above, the traffic is divided according to the configured proportion (v1: 70%; v2: 30%).
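To make the ratio easier to see, you can send a larger batch of requests and count how many responses carry the V3 marker of the new version (the loop count of 100 is just an assumption for illustration; roughly 30 hits would be expected with the 70/30 weights):

$ for i in $(seq 1 100); do curl -s -H "Host:micro-api.default.svc.cluster.local" 10.98.178.61:80/test/test; done | grep -o "V3" | wc -l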

(7) Test switching all traffic to the new version.

In order to verify Istio's traffic-control effect more clearly, we next change the traffic proportions in the VirtualService resource to switch all traffic to the new version. The changed VirtualService configuration file is as follows:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: micro-api-route
spec:
  #Defines the target host to which traffic is sent (here, the micro-api service deployed in k8s)
  hosts:
    - micro-api.default.svc.cluster.local
  #Bind the VirtualService to the Istio gateway and expose the routing target through the gateway
  gateways:
    - micro-gateway
  http:
    - route:
        #Set the traffic proportion of the old version (v1) to 0%
        - destination:
            host: micro-api.default.svc.cluster.local
            subset: v1
          #Set the traffic proportion through the weight value
          weight: 0
        #Set the traffic proportion of the new version (v2) to 100%
        - destination:
            host: micro-api.default.svc.cluster.local
            subset: v2
          weight: 100
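After changing the weights, re-apply the file so that the new routing rule takes effect (assuming the same file name as before):

$ kubectl apply -f virtual-service-all.yaml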

Continue to access the target service through the Istio gateway. The command is as follows:

# for i in ; do curl -H "Host:micro-api.default.svc.cluster.local" 10.98.178.61:80/test/test; done
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}
{"code":0,"data":"V3|Independent test interface return->OK!","message":"success"}

It can be observed that all traffic has been switched to the new version of service at this time!

Postscript

In the era of microservices, services are interconnected and their relationships are complex; deploying or upgrading one service may bring down the whole system. It is therefore necessary to choose an appropriate deployment method to minimize risk. The Canary (grayscale) release is only one of several deployment strategies; there are also blue-green deployment and rolling deployment (such as the rolling upgrade of k8s). Different release approaches can be chosen for different business scenarios.
