Kubernetes detailed tutorial -- detailed explanation of Service


7. Detailed explanation

7.1 Service introduction

In Kubernetes, the Pod is the carrier of the application. We can access an application through its Pod IP, but Pod IP addresses are not fixed, which means it is not convenient to access a service directly through Pod IPs.

To solve this problem, Kubernetes provides the Service resource. A Service aggregates multiple Pods that provide the same service and offers a unified entry address. By accessing the Service's entry address, you can reach the backend Pod services.

In many cases, a Service is just a concept; what really does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service is created, its information is written to etcd through the API server; kube-proxy detects the change through its watch mechanism and then translates the latest Service information into the corresponding access rules.

# 10.97.97.97:80 is the access entry provided by the service.
# When you access this entry, you can see there are three pod services waiting to be called.
# kube-proxy distributes requests to one of the pods based on the rr (round-robin) policy.
# This rule is generated on all nodes in the cluster at the same time, so it can be accessed from any node.
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

kube-proxy currently supports three working modes:

7.1.1 userspace mode

In userspace mode, kube-proxy creates a listening port for each Service. Requests to the Cluster IP are redirected to kube-proxy's listening port by iptables rules; kube-proxy then selects a Pod providing the service according to the LB algorithm, establishes a connection with it, and forwards the request to the Pod. In this mode, kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds extra data copies between kernel space and user space; this mode is relatively stable but inefficient.

7.1.2 iptables mode

In iptables mode, kube-proxy creates corresponding iptables rules for each backend Pod of the Service, redirecting requests sent to the Cluster IP directly to a Pod IP. In this mode, kube-proxy does not act as a layer-4 load balancer; it is only responsible for creating the iptables rules. The advantage is that this mode is more efficient than userspace mode, but it cannot provide a flexible LB strategy and cannot retry when a backend Pod is unavailable.
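You can inspect these rules on any node. A hedged sketch, assuming the service 10.97.97.97 from the example above (the KUBE-SVC/KUBE-SEP chain names contain an auto-generated hash, shown here as a placeholder):

# List the NAT entry kube-proxy created for the service's Cluster IP
iptables -t nat -L KUBE-SERVICES -n | grep 10.97.97.97
# Follow the matching KUBE-SVC-<hash> chain to see the per-pod KUBE-SEP-<hash> DNAT targets
iptables -t nat -L KUBE-SVC-<hash> -n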

7.1.3 ipvs mode

ipvs mode is similar to iptables mode: kube-proxy watches for Pod changes and creates the corresponding ipvs rules. ipvs is more efficient than iptables, and it also supports more LB algorithms.

# The ipvs kernel modules must be installed in this mode, otherwise it degrades to iptables
# Turn on ipvs
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
# Modify mode: "ipvs"
[root@k8s-master01 ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
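To confirm that the mode change took effect, you can read back the kube-proxy ConfigMap; a quick sketch:

# Verify that the kube-proxy configuration now sets mode to ipvs
kubectl get cm kube-proxy -n kube-system -o yaml | grep mode
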
7.2 Service type

Resource manifest file for Service:

kind: Service                # Resource type
apiVersion: v1               # Resource version
metadata:                    # Metadata
  name: service              # Resource name
  namespace: dev             # Namespace
spec:                        # Description
  selector:                  # Label selector, used to determine which pods are represented by the current service
    app: nginx
  type:                      # Service type, specifies the access method of the service
  clusterIP:                 # IP address of the virtual service
  sessionAffinity:           # Session affinity, supports the ClientIP and None options
  ports:                     # Port information
    - protocol: TCP
      port: 3017             # service port
      targetPort: 5003       # pod port
      nodePort: 31122        # host port
  • ClusterIP: the default value. It is a virtual IP automatically assigned by the Kubernetes system and can only be accessed inside the cluster
  • NodePort: exposes the Service to the outside through a port on each Node. With this method, you can access the Service from outside the cluster
  • LoadBalancer: uses an external load balancer to distribute load to the service. Note that this mode requires the support of an external cloud environment
  • ExternalName: introduces a service outside the cluster into the cluster so it can be used directly
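If you are unsure what a field in this manifest means, kubectl explain prints the built-in documentation for it; for example:

# Show the documented fields of a Service spec
kubectl explain service.spec
# Drill into a nested field, such as the ports list
kubectl explain service.spec.ports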
7.3 Service usage

7.3.1 Preparing the experimental environment

Before using a Service, first create three pods with a Deployment. Note that the pods must carry the label app=nginx-pod.

Create deployment.yaml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.1
          ports:
            - containerPort: 80
[root@k8s-master01 ~]# kubectl create -f deployment.yaml
deployment.apps/pc-deployment created

# View pod details
[root@k8s-master01 ~]# kubectl get pods -n dev -o wide --show-labels
NAME                             READY   STATUS    IP            NODE    LABELS
pc-deployment-66cb59b984-8p84h   1/1     Running   10.244.1.39   node1   app=nginx-pod
pc-deployment-66cb59b984-vx8vx   1/1     Running   10.244.2.33   node2   app=nginx-pod
pc-deployment-66cb59b984-wnncx   1/1     Running   10.244.1.40   node1   app=nginx-pod

# To facilitate the tests below, modify the index.html of each of the three nginx pods so that each returns its own Pod IP
# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
# echo "10.244.1.39" > /usr/share/nginx/html/index.html

# After the modification, access the pods to test
[root@k8s-master01 ~]# curl 10.244.1.39
10.244.1.39
[root@k8s-master01 ~]# curl 10.244.2.33
10.244.2.33
[root@k8s-master01 ~]# curl 10.244.1.40
10.244.1.40
7.3.2 ClusterIP type Service

Create the service-clusterip.yaml file

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97   # The IP address of the service; if omitted, one is generated by default
  type: ClusterIP
  ports:
    - port: 80             # Service port
      targetPort: 80       # Pod port
# Create the service
[root@k8s-master01 ~]# kubectl create -f service-clusterip.yaml
service/service-clusterip created

# View the service
[root@k8s-master01 ~]# kubectl get svc -n dev -o wide
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-clusterip   ClusterIP   10.97.97.97   <none>        80/TCP    13s   app=nginx-pod

# View the service details
# The Endpoints list shown here contains the service entries that the current service can load-balance to
[root@k8s-master01 ~]# kubectl describe svc service-clusterip -n dev
Name:              service-clusterip
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                10.97.97.97
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity:  None
Events:            <none>

# View the ipvs mapping rules
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Visit 10.97.97.97:80 and observe the effect
[root@k8s-master01 ~]# curl 10.97.97.97:80
10.244.2.33
7.3.3 Endpoint

Endpoints is a resource object in Kubernetes, stored in etcd, that records the access addresses of all the Pods corresponding to a service. It is generated according to the selector described in the service configuration file.

A service consists of a group of Pods, which are exposed through Endpoints; the Endpoints are the collection of endpoints that implement the actual service. In other words, the connection between a service and its Pods is realized through Endpoints.
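
You can view the Endpoints object directly; a sketch using the service created above (the AGE column is illustrative):

[root@k8s-master01 ~]# kubectl get endpoints -n dev
NAME                ENDPOINTS                                      AGE
service-clusterip   10.244.1.39:80,10.244.1.40:80,10.244.2.33:80   30s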

Load distribution policy

Requests to the Service are distributed to the backend Pods. At present, Kubernetes provides two load distribution strategies:

  • If it is not defined, the default kube-proxy policy is used, such as random or round-robin

  • Session affinity based on the client IP address, that is, all requests from the same client are forwarded to a fixed Pod

    To use this mode, add the sessionAffinity: ClientIP option to the spec

# View the ipvs mapping rules [rr means round-robin]
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Cyclic access test
[root@k8s-master01 ~]# while true; do curl 10.97.97.97:80; sleep 5; done;
10.244.1.40
10.244.1.39
10.244.2.33
10.244.1.40
10.244.1.39
10.244.2.33

# Modify the distribution policy - sessionAffinity: ClientIP

# View the ipvs rules [persistent means session persistence]
[root@k8s-master01 ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr persistent 10800
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0

# Cyclic access test
[root@k8s-master01 ~]# while true; do curl 10.97.97.97; sleep 5; done;
10.244.2.33
10.244.2.33
10.244.2.33

# Delete the service
[root@k8s-master01 ~]# kubectl delete -f service-clusterip.yaml
service "service-clusterip" deleted
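The "modify the distribution policy" step above corresponds to the following change to service-clusterip.yaml (a sketch; re-apply the file after editing it):

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  sessionAffinity: ClientIP   # All requests from the same client IP go to the same pod
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
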
7.3.4 Headless type Service

In some scenarios, developers may not want to use the load balancing provided by the Service and prefer to control the load balancing policy themselves. For this case, Kubernetes provides the Headless Service, which does not allocate a Cluster IP. To access such a service, you can only query through the service's domain name.

Create service-headliness.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None   # Set clusterIP to None to create a headless service
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
# Create the service
[root@k8s-master01 ~]# kubectl create -f service-headliness.yaml
service/service-headliness created

# Get the service; note that no CLUSTER-IP is allocated
[root@k8s-master01 ~]# kubectl get svc service-headliness -n dev -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service-headliness   ClusterIP   None         <none>        80/TCP    11s   app=nginx-pod

# View the service details
[root@k8s-master01 ~]# kubectl describe svc service-headliness -n dev
Name:              service-headliness
Namespace:         dev
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-pod
Type:              ClusterIP
IP:                None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity:  None
Events:            <none>

# View how the domain name resolves
[root@k8s-master01 ~]# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local

[root@k8s-master01 ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.40
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.39
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.33
7.3.5 NodePort type Service

In the previous examples, the IP of the created service can only be accessed inside the cluster. If you want to expose the service for external use, you need another type of Service, called NodePort. The working principle of NodePort is to map the service's port to a port on the Node; you can then access the service through NodeIP:NodePort.

Create service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort      # Service type
  ports:
    - port: 80
      nodePort: 30002   # Specify the port bound on the node (the default range is 30000-32767); if omitted, one is allocated automatically
      targetPort: 80
# Create the service
[root@k8s-master01 ~]# kubectl create -f service-nodeport.yaml
service/service-nodeport created

# View the service
[root@k8s-master01 ~]# kubectl get svc -n dev -o wide
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        SELECTOR
service-nodeport   NodePort   10.105.64.191   <none>        80:30002/TCP   app=nginx-pod

# Next, you can access port 30002 on any node IP in the cluster from a browser on the host machine to reach the pods
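Instead of a browser, you can also test from the command line; a sketch, assuming one of the node IPs is 192.168.109.100 (the master address used later in this tutorial):

# Access the NodePort service from outside the pod network
curl http://192.168.109.100:30002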
7.3.6 LoadBalancer type Service

LoadBalancer is very similar to NodePort; the purpose is also to expose a port to the outside. The difference is that LoadBalancer provisions a load balancing device outside the cluster, which requires the support of the external environment. Requests sent by external clients to this device are load-balanced by it and forwarded into the cluster.
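
A minimal manifest sketch for a LoadBalancer Service follows. The resource name is illustrative, and without a cloud provider to provision the load balancer, the EXTERNAL-IP will stay <pending>:

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer   # illustrative name
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer   # requires external cloud-environment support
  ports:
    - port: 80
      targetPort: 80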

7.3.7 ExternalName type Service

A Service of type ExternalName is used to introduce a service outside the cluster. It specifies the address of the external service through the externalName attribute; the external service can then be reached by accessing this Service from inside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName            # Service type
  externalName: www.baidu.com   # This can also be changed to an IP address
# Create the service
[root@k8s-master01 ~]# kubectl create -f service-externalname.yaml
service/service-externalname created

# Domain name resolution
[root@k8s-master01 ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.          30 IN CNAME www.a.shifen.com.
www.a.shifen.com.       30 IN A     39.156.66.18
www.a.shifen.com.       30 IN A     39.156.66.14
7.4 introduction to ingress

As mentioned in the previous course, there are two main ways for a Service to expose services outside the cluster: NodePort and LoadBalancer, but both have certain disadvantages:

  • The disadvantage of NodePort mode is that it occupies many ports on the cluster machines; this becomes more and more obvious as the number of cluster services grows
  • The disadvantage of LB mode is that each service needs its own LB, which is wasteful and troublesome, and requires the support of devices outside Kubernetes

Based on this situation, Kubernetes provides the Ingress resource object. Ingress needs only one NodePort or one LB to expose multiple services. The working mechanism is roughly shown in the figure below:

In fact, Ingress is equivalent to a layer-7 load balancer; it is Kubernetes' abstraction of a reverse proxy. Its working principle is similar to Nginx: you can understand it as establishing many mapping rules in Ingress. The Ingress Controller monitors these configuration rules, converts them into Nginx's reverse proxy configuration, and then provides services to the outside. There are two core concepts here:

  • ingress: an object in Kubernetes that defines the rules for how requests are forwarded to services
  • ingress controller: a program that implements reverse proxy and load balancing by parsing the rules defined by ingress and forwarding requests accordingly. There are many implementations, such as Nginx, Contour, HAProxy, and so on

The working principle of Ingress (taking Nginx as an example) is as follows:

  1. The user writes an Ingress rule specifying which domain name corresponds to which Service in the Kubernetes cluster
  2. The Ingress controller dynamically senses changes to the Ingress rules and generates a corresponding Nginx reverse proxy configuration
  3. The Ingress controller writes the generated Nginx configuration into a running Nginx service and updates it dynamically
  4. At this point, what is really working is an Nginx instance, internally configured with the user-defined request forwarding rules; a way to see this is sketched below
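
To see steps 2 and 3 in action, you can dump the Nginx configuration generated inside the controller pod; a sketch (replace the placeholder with the pod name from your own cluster):

# Print the reverse-proxy configuration the Ingress controller generated
kubectl exec -it <ingress-controller-pod> -n ingress-nginx -- cat /etc/nginx/nginx.conf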

7.5 Use of ingress

7.5.1 Environment preparation: building an ingress environment
# Create a folder
[root@k8s-master01 ~]# mkdir ingress-controller
[root@k8s-master01 ~]# cd ingress-controller/

# Obtain ingress-nginx; version 0.30 is used in this case
[root@k8s-master01 ingress-controller]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
[root@k8s-master01 ingress-controller]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

# Modify the image repository in the mandatory.yaml file
# Change quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
# to quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

# Create ingress-nginx
[root@k8s-master01 ingress-controller]# kubectl apply -f ./

# View ingress-nginx
[root@k8s-master01 ingress-controller]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-fbf967dd5-4qpbp    1/1     Running   0          12h

# View the service
[root@k8s-master01 ingress-controller]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.98.75.163   <none>        80:32240/TCP,443:31335/TCP   11h
7.5.2 preparing service and pod

For the convenience of later experiments, create the model shown in the figure below

Create tomcat-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.1
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
        - name: tomcat
          image: tomcat:8.5-jre10-slim
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
# Create
[root@k8s-master01 ~]# kubectl create -f tomcat-nginx.yaml

# View
[root@k8s-master01 ~]# kubectl get svc -n dev
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
nginx-service    ClusterIP   None         <none>        80/TCP     48s
tomcat-service   ClusterIP   None         <none>        8080/TCP   48s
7.5.3 HTTP proxy

Create ingress-http.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
    - host: nginx.itheima.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-service
              servicePort: 80
    - host: tomcat.itheima.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-service
              servicePort: 8080
# Create
[root@k8s-master01 ~]# kubectl create -f ingress-http.yaml
ingress.extensions/ingress-http created

# View
[root@k8s-master01 ~]# kubectl get ing ingress-http -n dev
NAME           HOSTS                                  ADDRESS   PORTS   AGE
ingress-http   nginx.itheima.com,tomcat.itheima.com             80      22s

# View details
[root@k8s-master01 ~]# kubectl describe ing ingress-http -n dev
...
Rules:
Host                 Path   Backends
----                 ----   --------
nginx.itheima.com    /      nginx-service:80 (10.244.1.96:80,10.244.1.97:80,10.244.2.112:80)
tomcat.itheima.com   /      tomcat-service:8080 (10.244.1.94:8080,10.244.1.95:8080,10.244.2.111:8080)
...

# Next, configure the hosts file on your local computer to resolve the two domain names above to 192.168.109.100 (the master)
# Then you can visit tomcat.itheima.com:32240 and nginx.itheima.com:32240 to see the effect
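If you prefer not to edit the hosts file, you can simulate the domain names with curl's Host header; a sketch using the controller's HTTP NodePort 32240 from the setup above:

# Send a request for each host through the ingress controller's NodePort
curl -H "Host: nginx.itheima.com" http://192.168.109.100:32240
curl -H "Host: tomcat.itheima.com" http://192.168.109.100:32240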
7.5.4 HTTPS proxy

Create certificate

# Generate a certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=BJ/L=BJ/O=nginx/CN=itheima.com"

# Create the secret (note: the secret must live in the same namespace as the Ingress that references it, hence -n dev)
kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n dev

Create ingress-https.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  tls:
    - hosts:
        - nginx.itheima.com
        - tomcat.itheima.com
      secretName: tls-secret   # Specify the secret
  rules:
    - host: nginx.itheima.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-service
              servicePort: 80
    - host: tomcat.itheima.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-service
              servicePort: 8080
# Create
[root@k8s-master01 ~]# kubectl create -f ingress-https.yaml
ingress.extensions/ingress-https created

# View
[root@k8s-master01 ~]# kubectl get ing ingress-https -n dev
NAME            HOSTS                                  ADDRESS         PORTS     AGE
ingress-https   nginx.itheima.com,tomcat.itheima.com   10.104.184.38   80, 443   2m42s

# View details
[root@k8s-master01 ~]# kubectl describe ing ingress-https -n dev
...
TLS:
  tls-secret terminates nginx.itheima.com,tomcat.itheima.com
Rules:
Host                 Path   Backends
----                 ----   --------
nginx.itheima.com    /      nginx-service:80 (10.244.1.97:80,10.244.1.98:80,10.244.2.119:80)
tomcat.itheima.com   /      tomcat-service:8080 (10.244.1.99:8080,10.244.2.117:8080,10.244.2.120:8080)
...

# You can now access https://nginx.itheima.com:31335 and https://tomcat.itheima.com:31335 from a browser to see the effect
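The HTTPS entry can also be tested from the command line; a sketch using curl's --resolve to map the domain name to the master IP (-k skips verification because the certificate above is self-signed; 31335 is the controller's HTTPS NodePort from the setup):

curl -k --resolve nginx.itheima.com:31335:192.168.109.100 https://nginx.itheima.com:31335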
