After-Class Practice: Kubernetes Core Concepts
1. Objective overview
This article walks through a simple, hands-on Kubernetes application. Through this short exercise, we hope to deepen your understanding of the core concepts of Kubernetes.
- Consolidate the basic concepts of Kubernetes
- Learn to deploy a standard "multi-tier" application with Kubernetes
- Learn how to describe an "application" in Kubernetes through API primitives such as Pod, Deployment, and Service
2. Experiment overview
In this experiment, we deploy a CRUD application named guestbook on a Kubernetes cluster. Guestbook is a classic example application from the Kubernetes community: it provides a web interface through which users perform CRUD operations, writing data to a Redis master node and reading data from multiple Redis slave nodes.
After completing this experiment, you will have practiced the following steps:
- Create the Redis master node
- Create the Redis slave node cluster
- Create the guestbook application
- Expose and access the guestbook application through a Service
- Scale the guestbook application horizontally
3. Resource requirements
A working Kubernetes cluster is required. You can use Alibaba Cloud Container Service for Kubernetes (ACK) for the hands-on exercise, use Minikube to quickly start a single-node cluster locally (in China, the Minikube China mirror is recommended), or use any Kubernetes cluster on the cloud. This walkthrough uses a Kubernetes cluster, version 1.12, provided by Alibaba Cloud Container Service.
You can run kubectl version to check whether your cluster version matches the one used in this experiment.
4. Experiment details
4.1 Create the Redis master node
Here, we use an API object called Deployment to describe the single-instance Redis master node.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: registry.cn-hangzhou.aliyuncs.com/kubeapps/redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
```
Save the above content as a local YAML file named redis-master-deployment.yaml. This file mainly defines two things: first, the container in the Pod uses a Redis image; second, the Deployment has replicas: 1, meaning only one Pod will be started.
Then, we use kubectl, the Kubernetes client, to apply it:
```
$ kubectl apply -f redis-master-deployment.yaml
deployment.apps/redis-master created
```
After this step, Kubernetes creates the Pod described in the YAML file for you. This usage is typical of declarative APIs.
Next, we can list the Pod:
```
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-68979f4ddd-pg9cv   1/1     Running   0          49s
```
As you can see, the Pod has entered the Running state, which means everything is normal. We can now check the Redis log in this Pod:
```
$ kubectl logs -f redis-master-68979f4ddd-pg9cv
1:C 26 Apr 2019 18:49:29.303 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 26 Apr 2019 18:49:29.303 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 26 Apr 2019 18:49:29.303 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 26 Apr 2019 18:49:29.304 * Running mode=standalone, port=6379.
1:M 26 Apr 2019 18:49:29.304 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 26 Apr 2019 18:49:29.304 # Server initialized
1:M 26 Apr 2019 18:49:29.304 * Ready to accept connections
```
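As a further sanity check, you can run a command directly inside the Pod. A minimal sketch, assuming the pod name from the `kubectl get pods` output above (substitute your own):

```shell
# Pod name taken from the `kubectl get pods` output above; substitute your own.
POD=redis-master-68979f4ddd-pg9cv
# Ask the Redis server inside the Pod to respond; a healthy instance replies PONG.
kubectl exec "$POD" -- redis-cli ping
```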
4.2 Create a Service for the Redis master node
In Kubernetes, the recommended way to access a Pod is through a Service, so that clients do not need to track Pod IP addresses. Our guestbook website needs to access the Redis master Pod, so it, too, goes through a Service. The Service API object is defined as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
```
This Service is named redis-master, and it declares that its own port 6379 proxies port 6379 of the selected Pods.
Let's save the above into a file and let Kubernetes create it for us:
```
$ kubectl apply -f redis-master-service.yaml
service/redis-master created
```
Then we can check the Service:
```
$ kubectl get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP    181d
redis-master   ClusterIP   10.107.220.208   <none>        6379/TCP   9s
```
At this point, the Redis master node can be reached at 10.107.220.208:6379 from inside the cluster.
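To verify this connectivity from inside the cluster, one option is a throwaway client Pod. A sketch, assuming the same Redis image used above is suitable as a client (the Service DNS name redis-master works as well as the ClusterIP):

```shell
# Start a one-off interactive Pod with a Redis image, connect to the
# redis-master Service, and delete the Pod when the session ends.
kubectl run redis-client --rm -it --restart=Never \
  --image=registry.cn-hangzhou.aliyuncs.com/kubeapps/redis \
  -- redis-cli -h redis-master ping
```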
4.3 Create the Redis slave node cluster
In our example, multiple Redis slave nodes jointly serve read requests. Again, we use a Deployment to express the semantics of "a service made up of multiple replicas of the same Pod".
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: registry.cn-hangzhou.aliyuncs.com/kubeapps/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379
```
In this Deployment, we specify replicas: 2, so the Deployment starts two identical Pods (the Redis slave nodes).
In addition, the gb-redisslave:v1 image automatically reads the value of the environment variable REDIS_MASTER_SERVICE_HOST, i.e. the Service address of the Redis master node, and uses it to join the master as a replica. Kubernetes automatically injects this environment variable into every Pod in the cluster, deriving its name from the redis-master Service.
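The naming rule behind that variable is simple: the Service name is upper-cased, dashes become underscores, and the suffixes _SERVICE_HOST / _SERVICE_PORT are appended. A minimal sketch of the rule (svc_to_env_prefix is a hypothetical helper for illustration, not part of Kubernetes):

```shell
# Derive the environment-variable prefix Kubernetes uses for a Service name:
# upper-case the name and replace dashes with underscores.
svc_to_env_prefix() {
  printf '%s' "$1" | tr 'a-z-' 'A-Z_'
}

prefix=$(svc_to_env_prefix redis-master)
echo "${prefix}_SERVICE_HOST"   # REDIS_MASTER_SERVICE_HOST
echo "${prefix}_SERVICE_PORT"   # REDIS_MASTER_SERVICE_PORT
```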
Then, we create the Redis slave nodes:
```
$ kubectl apply -f redis-slave-deployment.yaml
deployment.apps/redis-slave created
```
At this time, we can view the status of these slave nodes:
```
$ kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
redis-master-68979f4ddd-pg9cv   1/1     Running             0          17m
redis-slave-78b464f5cd-2kn7w    0/1     ContainerCreating   0          37s
redis-slave-78b464f5cd-582bk    0/1     ContainerCreating   0          37s
```
4.4 Create a Service for the Redis slave nodes
Similarly, for the guestbook application to access the Redis slave nodes above, we need to create a Service for them. In Kubernetes, a Service can select multiple Pods through its selector and load-balances requests across them. The Service definition is as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
```
Create and view the Service (note: port 6379 uses the shorthand form here, so targetPort can be omitted; it defaults to the same value as port):
```
$ kubectl apply -f redis-slave-svc.yaml
service/redis-slave created
$ kubectl get services
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP    181d
redis-master   ClusterIP   10.107.220.208   <none>        6379/TCP   16m
redis-slave    ClusterIP   10.101.244.239   <none>        6379/TCP   57s
```
In this way, 10.101.244.239:6379 reaches one of the Redis slave nodes, with the Service load-balancing across them.
4.5 Create the guestbook application
The guestbook application itself is also described by a Deployment, as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: registry.cn-hangzhou.aliyuncs.com/kubeapps/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80
```
This YAML defines a Deployment with 3 replicas, so the guestbook application starts 3 Pods.
We create this Deployment with the same steps as before:
```
$ kubectl apply -f frontend.yaml
deployment.apps/frontend created
```
To view the status of the Pods:
```
$ kubectl get pods -l app=guestbook -l tier=frontend
NAME                       READY   STATUS    RESTARTS   AGE
frontend-78d6c59f4-2x24x   1/1     Running   0          3m4s
frontend-78d6c59f4-7mz87   1/1     Running   0          3m4s
frontend-78d6c59f4-sw7f2   1/1     Running   0          3m4s
```
4.6 Create a Service for the guestbook application
To let users access the guestbook, we also need to create a Service for it, exposing the application as a Service.
Since these users are outside the Kubernetes cluster, this Service must be externally accessible. Kubernetes offers several ways to achieve this; the most common pattern on the cloud is the LoadBalancer type.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # Self-built clusters can only use the NodePort type
  # type: NodePort
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```
Because my cluster is provided by Alibaba Cloud Container Service, I can directly use the LoadBalancer type as above.
```
$ kubectl apply -f frontend-service.yaml
$ kubectl get service frontend
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend   LoadBalancer   172.19.10.209   22.214.171.124   80:32372/TCP   1m
```
Now, all you have to do is open the EXTERNAL-IP shown above in your browser (http://<EXTERNAL-IP>) to access the deployed guestbook application.
If you run a self-built cluster, you can only use the NodePort type for this experiment (the commented lines in the YAML above show how). Note that NodePort is not recommended for production because of security concerns.
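With NodePort, you access the application through any node's IP and the allocated node port. A sketch, assuming a working kubectl context and that curl is available; the jsonpath queries below pick the first port and the first address of the first node:

```shell
# Look up the node port allocated for the frontend Service.
NODE_PORT=$(kubectl get service frontend \
  -o jsonpath='{.spec.ports[0].nodePort}')
# Use the address of any cluster node.
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[0].address}')
# Fetch the guestbook front page.
curl "http://${NODE_IP}:${NODE_PORT}/"
```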
4.7 Scale the guestbook application horizontally
Scaling your application horizontally with Kubernetes to handle more requests is very simple; it takes just one command:
```
$ kubectl scale deployment frontend --replicas=5
deployment.extensions/frontend scaled
```
You will immediately see that the number of instances of your guestbook application has changed from 3 to 5:
```
$ kubectl get pods -l app=guestbook -l tier=frontend
NAME                       READY   STATUS    RESTARTS   AGE
frontend-78d6c59f4-2x24x   1/1     Running   0          14m
frontend-78d6c59f4-7mz87   1/1     Running   0          14m
frontend-78d6c59f4-chxwd   1/1     Running   0          19s
frontend-78d6c59f4-jrvfx   1/1     Running   0          19s
frontend-78d6c59f4-sw7f2   1/1     Running   0          14m
```
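If you want to check the count in a script rather than by eye, you can count the lines whose STATUS column is Running. A minimal sketch using sample lines in place of a live cluster (pipe real `kubectl get pods` output through the same awk filter):

```shell
# Sample `kubectl get pods` output lines standing in for a live cluster.
sample='frontend-78d6c59f4-2x24x 1/1 Running 0 14m
frontend-78d6c59f4-7mz87 1/1 Running 0 14m
frontend-78d6c59f4-chxwd 1/1 Running 0 19s
frontend-78d6c59f4-jrvfx 1/1 Running 0 19s
frontend-78d6c59f4-sw7f2 1/1 Running 0 14m'

# Count lines whose third column (STATUS) is Running.
running=$(printf '%s\n' "$sample" | awk '$3 == "Running" { n++ } END { print n }')
echo "$running"   # 5
```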
This article was written by the Alibaba Cloud container platform team. If you have any questions or want to reprint it, please contact us. Thank you!
This is original content from the Yunqi Community and may not be reproduced without permission.