Hands-on: Kubernetes network Service experiments (successful test, blog output) - 20211005

Contents
Preface
Basic knowledge
Experimental environment
Experiment 1: defining a Service resource
Experiment 2: defining a multi-port Service resource
Experiment 3: testing the three common Service types
Experiment 4: Service proxy mode: switching kube-proxy to ipvs mode on a kubeadm cluster
Experiment 5: Service DNS name test
Summary

Preface

In this article, I will walk you through a set of Kubernetes network Service experiments.

The theme of my blog: I hope everyone can follow along with the experiments. Do the experiments first, then dig deeper into the technical points with the theory, so that learning stays fun and motivating. The steps in my posts are complete, and I also share the source code and the software used in the experiments. I hope we can make progress together!

If you run into any problems while following along, feel free to contact me at any time and I will help you solve them for free:

  1. Personal wechat QR Code: x2675263825 (shede), qq: 2675263825.

  2. Personal blog address: www.onlyonexl.cn

  3. Personal WeChat official account: Cloud Native Architect in Action

  4. Personal csdn

    https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421

Basic knowledge

Experimental environment

Experimental environment:

  • Host: Windows 10 with VMware Workstation virtual machines
  • k8s cluster: 3 CentOS 7.6 (1810) virtual machines, 1 master node and 2 worker nodes
  • k8s version: v1.21
  • Container runtime: docker://20.10.7

Experiment 1: defining a Service resource

1. Write service.yaml

  • Here we generate two YAML files, deployment.yaml and service.yaml, from the command line
#Create a deployment first
[root@k8s-master ~]#kubectl create deployment web --image=nginx --replicas=3

#Export deployment.yaml
[root@k8s-master ~]#kubectl create deployment web --image=nginx --dry-run=client --replicas=3 -o yaml > deployment.yaml

#Export service.yaml
[root@k8s-master ~]#kubectl expose deployment web --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml > service.yaml
  • Edit service.yaml and delete the timestamp and other generated fields. The final configuration is as follows
[root@k8s-master ~]#vim service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort

2. Apply service.yaml and check the result

[root@k8s-master ~]#kubectl apply -f service.yaml
service/web created
[root@k8s-master ~]#kubectl get pod,svc    #check the result

3. How is the Service associated with the project's Pods?

How is the Service associated with the project's Pods? => A Service selects a set of Pods through labels.

  • At this point, we deploy another project
[root@k8s-master ~]#cp deployment.yaml deployment2.yaml
[root@k8s-master ~]#cp service.yaml service2.yaml
  • Write deployment2.yaml
[root@k8s-master ~]#vim deployment2.yaml
#Edit deployment2.yaml: delete the timestamp and other generated fields, and change the deployment name
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web2
  name: web2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web2
  strategy: {}
  template:
    metadata:
      labels:
        app: web2
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}

  • Apply deployment2.yaml
[root@k8s-master ~]#kubectl apply -f deployment2.yaml
  • Edit service2.yaml
[root@k8s-master ~]#vim service2.yaml
#Change the label value to web2
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web2
  name: web2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web2
  type: NodePort

  • Apply service2.yaml
[root@k8s-master ~]#kubectl apply -f service2.yaml
  • Check the result
[root@k8s-master ~]#kubectl get pod,svc

At this point, two projects exist: web and web2.

  • So how does a Service match the Pods of different projects?

=> A Service selects a set of Pods through labels.

But how can we query the association between a Service and its Pods from the command line?

  • Let's take a look at their labels
[root@k8s-master ~]#kubectl get pod --show-labels

Compare these labels with the selector field in service.yaml:

  • Method 1: use a label selector to confirm which Pods carry a given label
[root@k8s-master ~]#kubectl get pod -l app=web
#Note: the Service maps a Pod into the project as long as the Pod's labels match the selector,
#regardless of whether or not the Pod was created by a Deployment.

  • Method 2: view the Endpoints object
[root@k8s-master ~]#kubectl get endpoints #Abbreviated ep

  • You can think of the Service here as an nginx load balancer: much like configuring an upstream block in nginx. A quick way to verify which backends the Service currently points to is shown below.
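As a quick check (a minimal sketch using the web Service created above), the following commands show the Service's selector and the Pod IP:port pairs it currently load-balances to:

#Show the Service definition, including its selector and current endpoints
[root@k8s-master ~]#kubectl describe service web

#Show the Pod IP:port pairs behind the Service
[root@k8s-master ~]#kubectl get endpoints web -o wide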

Experiment 2: defining a multi-port Service resource

Service definition and creation, multi-port Service definition: some applications need to expose more than one port. In that case the Service also needs multiple port definitions, which are distinguished by port name.

Multi-port Service definition:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: web
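Not in the original courseware, but closely related: targetPort can also reference a named container port, so the Service keeps working even if the container's port number changes. A minimal sketch, assuming the Pod template names its port http:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http    #resolves to the container port named "http" in the matching Pods
  selector:
    app: web

#Matching container fragment in the Deployment's Pod template:
#  ports:
#  - name: http
#    containerPort: 80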

Experimental environment

This test builds on the environment from Experiment 1.

1. Write the service3.yaml file

[root@k8s-master ~]#cp service.yaml service3.yaml
[root@k8s-master ~]#vim service3.yaml
#Modify the corresponding fields
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    name: api1        #name for port 80
    protocol: TCP
    targetPort: 80
  - port: 81
    name: api2        #name for port 81
    protocol: TCP
    targetPort: 81
  selector:
    app: web
  type: NodePort

2. Apply service3.yaml and check the result

[root@k8s-master ~]#kubectl apply -f service3.yaml
[root@k8s-master ~]#kubectl get svc

In practice this is rarely needed; normally a Pod exposes only one service port.

This is the end of the experiment.

Experiment 3: testing the three common Service types

Experimental environment

It is tested on the basis of Experiment 2 above.

1. ClusterIP configuration in a Service

  • We generate a service yaml file and edit it
[root@k8s-master ~]#kubectl expose deployment web --port=80 --target-port=80 --dry-run=client -o yaml > service-clusterip.yaml
#Note: when the --type= parameter is not specified, the default type is ClusterIP
  • Edit the service-clusterip.yaml file
[root@k8s-master ~]#vim service-clusterip.yaml
#Delete the timestamp and other generated fields, and change the Service name and label
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web6
  name: web6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web6

  • Apply it
[root@k8s-master ~]#kubectl apply -f service-clusterip.yaml
service/web6 created
[root@k8s-master ~]#
  • Check: we did not specify --type here, so the default is ClusterIP

  • One problem to check: the Pods associated with this Service look a little off... => In fact there is not much wrong; it is caused by the extra port configuration.

[root@k8s-master ~]#kubectl get service -o yaml | grep selector -A 1    #-A 1 shows one line after each match; -B shows lines before the match

At this point we remove the multi-port configuration and apply again, otherwise it will interfere with the experiment:

[root@k8s-master ~]#vim service3.yaml
#Configure as follows
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    name: api1
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort

Re-apply and check:

  • Now let's continue with the question above, taking the web Pods as an example.

When we access the ClusterIP, the request is forwarded to one of the backend Pods;

This ClusterIP is reachable from any Pod or node within the cluster:

The ClusterIP can be accessed from all three nodes:

The ClusterIP can also be accessed from inside any Pod:

Let's create a busybox container:

Go into the container and use wget to verify the conclusions above; a minimal sketch follows the output below.

[root@k8s-master ~]#kubectl run bs --image=busybox -- sleep 24h    #run a test container
pod/bs created
[root@k8s-master ~]#kubectl get pod    #view pods
NAME                    READY   STATUS    RESTARTS   AGE
bs                      1/1     Running   0          8s
web-96d5df5c8-7nv2b     1/1     Running   1          7h24m
web-96d5df5c8-85mlv     1/1     Running   1          7h24m
web-96d5df5c8-x27wr     1/1     Running   1          7h24m
web2-7d78cf6476-cmsm5   1/1     Running   1          7h24m
web2-7d78cf6476-ddvzs   1/1     Running   1          7h24m
web2-7d78cf6476-zhk8n   1/1     Running   1          7h24m
[root@k8s-master ~]#
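The verification itself can look like the following minimal sketch; <cluster-ip> stands for the CLUSTER-IP of the web Service as reported by kubectl get svc:

#Look up the ClusterIP of the web Service
[root@k8s-master ~]#kubectl get svc web

#Access it from any of the three nodes
[root@k8s-node1 ~]#curl http://<cluster-ip>

#Access it from inside the busybox Pod
[root@k8s-master ~]#kubectl exec -it bs -- sh
/ # wget -qO- http://<cluster-ip>
/ # exit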

This is the end of the experiment!

2. NodePort configuration in a Service

  • Create the YAML file and modify it
[root@k8s-master ~]#cp service-clusterip.yaml service-nodeport.yaml
[root@k8s-master ~]#vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web6
  name: web6
spec:
  type: NodePort    #change the Service type to NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web        #this label associates the Service with the existing web workload

  • apply and view
[root@k8s-master ~]#kubectl apply -f service-nodeport.yaml
service/web6 configured
[root@k8s-master ~]#

Note: at this point the address can be accessed on all three nodes.

  • Note: the reason we can access port 30849 is the following logic:

All three nodes listen on the NodePort;

The three nodes do not listen on port 80 itself; a quick way to check this is sketched below.
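A minimal sketch of that check; 30849 is the NodePort assigned in this example (substitute whatever kubectl get svc shows), and <node-ip> stands for any node's IP:

#On any node: the NodePort is held open, but port 80 is not
[root@k8s-node1 ~]#ss -lntp | grep 30849
[root@k8s-node1 ~]#ss -lntp | grep ':80 '

#The Service is reachable through any node's IP plus the NodePort
[root@k8s-master ~]#curl http://<node-ip>:30849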

  • We can also specify the NodePort number manually (though this is generally not done)

Specify a nodePort explicitly:

[root@k8s-master ~]#vim service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web6
  name: web6
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30006    #changed to 30006
  selector:
    app: web

  • apply and view

The page can now be opened in a browser:

At the same time, you can see that all three nodes are listening on port 30006.

This is the end of the experiment!

Experiment 4: Service proxy mode: switching kube-proxy to ipvs mode on a kubeadm cluster

Original courseware

Switching a kubeadm cluster to ipvs mode:

# kubectl edit configmap kube-proxy -n kube-system
...
mode: "ipvs"
...
# kubectl delete pod kube-proxy-btz4p -n kube-system

Note:
1. The kube-proxy configuration is stored in a ConfigMap.
2. For the change to take effect on all nodes, the kube-proxy Pod must be recreated on every node.

Switching a binary installation to ipvs mode:

# vi kube-proxy-config.yml
mode: ipvs
ipvs:
  scheduler: "rr"
# systemctl restart kube-proxy

Note: the configuration file path depends on the actual installation directory.

1. What is the proxy mode of the current service?

  • View the kube-proxy Pods created by the DaemonSet controller
[root@k8s-master ~]#kubectl get pod -n kube-system    #view the kube-proxy Pods created by the DaemonSet controller
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-hlxp7   1/1     Running   13         18d
calico-node-c5sl6                          1/1     Running   31         18d
calico-node-l7q5q                          1/1     Running   9          18d
calico-node-sck2q                          1/1     Running   8          18d
etcd-k8s-master                            1/1     Running   9          18d
kube-apiserver-k8s-master                  1/1     Running   10         18d
kube-controller-manager-k8s-master         1/1     Running   10         18d
kube-proxy-9249w                           1/1     Running   12         18d
kube-proxy-mj7l5                           1/1     Running   8          18d
kube-proxy-p9bd4                           1/1     Running   9          18d
kube-scheduler-k8s-master                  1/1     Running   10         18d
[root@k8s-master ~]#
  • Look at one Pod's log:
[root@k8s-master ~]#kubectl logs kube-proxy-9249w -n kube-system

From the log we can see which proxy mode kube-proxy is using for the Services it maintains: the default mode is iptables. Two other ways to check the mode are sketched below.
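A minimal sketch of those checks; kube-proxy-9249w is a Pod name from this cluster, and port 10249 assumes the default kube-proxy metrics bind address:

#Grep the kube-proxy log for the proxier it started with
[root@k8s-master ~]#kubectl logs kube-proxy-9249w -n kube-system | grep -i proxier

#Query kube-proxy's metrics endpoint on the node where it runs
#(prints "iptables" before the switch, "ipvs" afterwards)
[root@k8s-node1 ~]#curl 127.0.0.1:10249/proxyMode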

kube-proxy is deployed by the DaemonSet controller:

Note: however, it is not under the /etc/kubernetes/manifests/ directory:

2. Modify the kube-proxy ConfigMap online

[root@k8s-master ~]#kubectl edit configmap kube-proxy -n kube-system
#change mode: "" to mode: "ipvs"
[root@k8s-master ~]#

3. Restart kube-proxy

  • kube-proxy has no hot-reload mechanism of its own, so you need to restart it:

Take one kube-proxy Pod as an example:

First locate which node the kube-proxy Pod is running on.

On node1:

From k8s-master, delete the kube-proxy Pod (there are 3 such Pods in total; a sketch for restarting all of them at once follows the command below):

[root@k8s-master ~]#kubectl delete pod kube-proxy-9249w -n kube-system
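To apply the change on every node at once, the kube-proxy Pods can also be deleted by label instead of one by one; the DaemonSet recreates them immediately. A minimal sketch, assuming the default kubeadm label k8s-app=kube-proxy:

#Restart kube-proxy on all nodes in one command
[root@k8s-master ~]#kubectl delete pod -n kube-system -l k8s-app=kube-proxy

#Confirm the new Pods are back up
[root@k8s-master ~]#kubectl get pod -n kube-system -l k8s-app=kube-proxy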

4. Install ipvsadm package on all nodes

  • Install the ipvsadm package on all nodes:
[root@k8s-node1 ~]#yum install -y ipvsadm
[root@k8s-node2 ~]#yum install -y ipvsadm
[root@k8s-master ~]#yum install -y ipvsadm

When viewing node1 there are now many ipvs rules (the kube-proxy proxy mode on node1 has just been switched to ipvs; k8s-node2 shows the same):

The rules show the virtual servers and the real servers behind them; see the sketch below.
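A minimal sketch of inspecting those rules with ipvsadm; <cluster-ip> stands for any CLUSTER-IP shown by kubectl get svc:

#List all ipvs virtual servers and the real servers (backend Pod IPs) behind them
[root@k8s-node1 ~]#ipvsadm -Ln

#Show only the virtual server for one ClusterIP on TCP port 80
[root@k8s-node1 ~]#ipvsadm -Ln -t <cluster-ip>:80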

So what is the packet path when we access a ClusterIP on node1?

5. View the packet path in ipvs proxy mode

The ClusterIP is bound to the kube-ipvs0 virtual network interface;

This is different from iptables mode, where no real network interface is visible and traffic simply follows the iptables rules;

NodePort access follows the same kind of ipvs rule; a quick way to inspect this is sketched below.
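A minimal sketch of inspecting this; 30006 is the NodePort fixed earlier and <node-ip> stands for the node's own IP:

#Every ClusterIP is bound as an address on the kube-ipvs0 dummy interface
[root@k8s-node1 ~]#ip addr show kube-ipvs0

#The NodePort shows up as an ipvs virtual server on the node's own IP
[root@k8s-node1 ~]#ipvsadm -Ln -t <node-ip>:30006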

The experiment is over!

Experiment 5: Service DNS name test

1. A quick CoreDNS test

  • Check the DNS installed by kubeadm
[root@k8s-master 2021-06-19]#kubectl get pod -A
[root@k8s-master 2021-06-19]#kubectl get pod,svc -n kube-system

  • Go into the bs test container and check its DNS configuration: the nameserver in the Pod points to CoreDNS;

  • Perform a resolution test

Try resolving the web6 Service:

Both the ClusterIP and the Service name can be used here:

  • Note

The nslookup error reported in the busybox container is actually harmless; it is a problem with the busybox:latest image. You can use the busybox:1.28.4 image or another image for testing instead, as in the sketch below.
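A minimal sketch of the same test with the busybox:1.28.4 image, run as a throwaway Pod (web6 is the Service created earlier in the default namespace):

#Run a one-off Pod whose nslookup works, and clean it up afterwards
[root@k8s-master ~]#kubectl run -it --rm dns-test --image=busybox:1.28.4 --restart=Never -- nslookup web6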

This is the end of the experiment!

2. A small experiment on the ClusterIP A record format

  • Create a new namespace and deploy a Pod in it
[root@k8s-master ~]#kubectl create ns test
namespace/test created
[root@k8s-master ~]#kubectl create deployment web --image=nginx -n test
deployment.apps/web created
[root@k8s-master ~]#kubectl expose deployment web --port=80 --target-port=80 -n test
service/web exposed
[root@k8s-master ~]#
[root@k8s-master ~]#kubectl get pod,svc -n test
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-tj7pr   1/1     Running   0          2m52s

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/web   ClusterIP   10.111.132.151   <none>        80/TCP    2m22s
[root@k8s-master ~]#
  • Create a busybox:1.28.4 Pod in the default namespace
[root@k8s-master ~]#kubectl run bs3 --image=busybox:1.28.4 -- sleep 24h
pod/bs3 created
[root@k8s-master ~]#kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
bs                      1/1     Running   2          15h
bs3                     1/1     Running   0          34s
web-96d5df5c8-7nv2b     1/1     Running   3          22h
web-96d5df5c8-85mlv     1/1     Running   3          22h
web-96d5df5c8-x27wr     1/1     Running   3          22h
web2-7d78cf6476-cmsm5   1/1     Running   3          22h
web2-7d78cf6476-ddvzs   1/1     Running   3          22h
web2-7d78cf6476-zhk8n   1/1     Running   3          22h
[root@k8s-master ~]#
  • Question: can the Service in the test namespace be resolved from inside this busybox container?

By default a bare Service name resolves within the Pod's own namespace. To resolve a Service in another namespace, you need to append the namespace.

It is recommended to write the full name, for example: my-svc.my-namespace.svc.cluster.local. A sketch of the test is shown below.
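A minimal sketch of this cross-namespace test from the bs3 Pod (web is the Service created in the test namespace above):

#A bare name resolves in the Pod's own namespace, so this returns the web Service in default, not the one in test
[root@k8s-master ~]#kubectl exec -it bs3 -- nslookup web

#Appending the namespace resolves the Service in the test namespace
[root@k8s-master ~]#kubectl exec -it bs3 -- nslookup web.test

#The fully qualified name also works
[root@k8s-master ~]#kubectl exec -it bs3 -- nslookup web.test.svc.cluster.local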

The experiment is over!

Summary

Well, that's all for the Kubernetes network Service experiments. Thank you for reading. Finally, I'll leave you with a nice picture. I wish you a happy life and a meaningful day, every day. See you next time!

6 October 2021, 08:50
