Kubernetes quick start tutorial series

Preface

Kubernetes involves many concepts and touches a wide range of technologies in the cloud-native ecosystem, so the learning curve is relatively steep. In k8s, resources are usually deployed by writing yaml files, which is a high barrier for beginners. This article instead takes a command-line approach to get started quickly, giving an overview of the core concepts of kubernetes along the way.

1. Basic concepts

1.1 Clusters and Nodes

kubernetes is an open source container orchestration platform that provides automated deployment of containerized applications, task scheduling, elastic scaling, load balancing, and other capabilities. A cluster is composed of two roles: master and node. The master is responsible for managing the cluster and runs the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd components. The node role actually runs the applications and consists of the Container Runtime, kubelet, and kube-proxy, where the Container Runtime may be Docker, rkt, or containerd. Nodes may be physical or virtual machines.

1. View the status of the master components

[root@node-1 ~]# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

2. View node list

[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   26h   v1.14.1
node-2   Ready    <none>   26h   v1.14.1
node-3   Ready    <none>   26h   v1.14.1

3. View node details

[root@node-1 ~]# kubectl describe node node-3
Name:               node-3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64   #Labels and Annotations
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-3
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:f8:75:bb:da:4e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.254.100.103
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 10 Aug 2019 17:50:00 +0800
Taints:             <none>
Unschedulable:      false   #Whether scheduling is disabled; this flag is set by the cordon command.
Conditions:     #Node conditions: MemoryPressure means memory pressure (i.e., insufficient memory)
                #DiskPressure means disk pressure
                #PIDPressure means process ID (PID) pressure
                #Ready indicates whether the node is working properly, i.e. resources are sufficient and the related processes are running
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 18:04:20 +0800   KubeletReady                 kubelet is posting ready status
Addresses:     #Address and hostname
  InternalIP:  10.254.100.103
  Hostname:    node-3
Capacity:      #Total resource capacity of the node
 cpu:                2
 ephemeral-storage:  51473868Ki
 hugepages-2Mi:      0
 memory:             3880524Ki
 pods:               110
Allocatable:    #Resources available for allocation to pods
 cpu:                2
 ephemeral-storage:  47438316671
 hugepages-2Mi:      0
 memory:             3778124Ki
 pods:               110
System Info:     #System information, such as kernel version, OS version, cpu architecture, and node software versions
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                D98ECAB1-2D9E-41CC-9A5E-51A44DC5BB97
 Boot ID:                    6ec81f5b-cb05-4322-b47a-a8e046d9bf79
 Kernel Version:             3.10.0-957.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1       #Container runtime is docker, version 18.3.1
 Kubelet Version:            v1.14.1               #kubelet version
 Kube-Proxy Version:         v1.14.1               #kube-proxy version
PodCIDR:                     10.244.2.0/24         #Network used by pod
Non-terminated Pods:         (4 in total)          #Per-pod resource usage on this node
  Namespace                  Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                           ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-fb8b8dccf-hrqm8        100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system                coredns-fb8b8dccf-qwwks        100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system                kube-flannel-ds-amd64-zzm2g    100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      26h
  kube-system                kube-proxy-x8zqh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:   #Allocated resources
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                300m (15%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>
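
The cpu percentages in the output above can be checked by hand: CPU is measured in millicores (1 CPU = 1000m), so this 2-CPU node offers 2000m, and the 300m of total requests come to 15%. A quick local sketch of the arithmetic, no cluster needed:

```shell
# CPU is measured in millicores: 1 CPU = 1000m, so this node offers 2000m.
# Total requests above are 100m + 100m + 100m = 300m.
awk 'BEGIN{ printf "cpu requests: %d%%\n", 300/2000*100 }'
# Memory: 190Mi requested of 3778124Ki allocatable is about 5%.
awk 'BEGIN{ printf "memory requests: %d%%\n", 190*1024/3778124*100 }'
```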

1.2 Containers and Applications

Kubernetes is a container orchestration engine responsible for scheduling, managing, and running containers. However, the smallest scheduling unit in kubernetes is not the container but the pod, and a pod can contain multiple containers. Pods are usually not run directly in the cluster, but through controllers such as Deployments, ReplicaSets, and DaemonSets. Why? Because controllers ensure the consistency of pod state; as officially described, they 'make sure the current state matches the desired state'. Simply put, if a pod becomes abnormal, the controller rebuilds it on another node, ensuring that the pods actually running in the cluster always match what was requested.

  • Container, a lightweight virtualization technology that makes applications easy to deploy and distribute by packaging them into images.
  • Pod, the smallest scheduling unit in kubernetes. A pod wraps a pause container together with one or more application containers, which share the same network namespace and storage.
  • Deployments, the controller for stateless applications (its stateful counterpart is StatefulSets). A Deployment controls the replicas of an application, and the Deployment controller inside kube-controller-manager drives those replicas toward the desired state.
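
For reference, the yaml form mentioned earlier for such a Deployment might look like the following minimal sketch; the names and labels here are illustrative, not taken from the cluster used in this article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo          # illustrative name
spec:
  replicas: 3               # desired state: three pod replicas
  selector:
    matchLabels:
      app: nginx-demo       # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9  # the image used in this article's examples
        ports:
        - containerPort: 80
```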

1.3 Service Access

Pods in kubernetes are where applications actually run, and each pod is attached to a node. If the node fails, a kubernetes controller such as a replicaset pulls up a replacement pod on another node, and the new pod is assigned a new IP. Furthermore, an application deployment usually contains multiple replicas; for example, a deployment with three pod replicas is equivalent to three Real Servers on the back end. How do you access these three replicas? In that situation we normally put a load balancer in front of the Real Servers, and Service is that load balancing scheduler for pods: it abstracts a dynamic set of pods into a single service, applications access the service directly, and the service automatically forwards requests to the back-end pods. Two mechanisms implement the service forwarding rules: iptables and ipvs. iptables achieves load balancing by installing rules such as DNAT, while ipvs sets its forwarding rules through ipvsadm.

Depending on how services are accessed, services are categorized into the following types: ClusterIP, NodePort, LoadBalancer, and ExternalName, which can be set through the type field.

  • ClusterIP, for mutual access within the cluster; combined with DNS, it implements in-cluster service discovery;
  • NodePort, which exposes a port on each node through NAT for external access;
  • LoadBalancer, an externally accessible load balancer provided by cloud vendors; the technical details depend on the cloud service provider, for example Tencent Cloud's CLB integration;
  • ExternalName, which maps a service to an external domain name. Domain-name-based access from outside the cluster is more commonly implemented through an ingress controller, which forwards external requests into the cluster and depends on a concrete implementation such as nginx or traefik, with offerings from the major cloud vendors as well.

    Pods are dynamic: their ip addresses may change (for example after a node failure), and the replica count may change, such as scaling the application up or down. How does the service track these dynamic pods? The answer is labels: the service automatically filters out the Endpoints of an application by its labels and updates them whenever the pods change. Different applications are distinguished by different labels. For more on labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
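
The matching can be sketched in yaml: a Service selects pods purely by labels, and any pod whose labels contain the selector's key/value pairs becomes an endpoint (all names and labels below are illustrative):

```yaml
# Service side: the selector
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # illustrative name
spec:
  selector:
    app: web               # pods carrying this label become endpoints
  ports:
  - port: 80
    targetPort: 80
---
# Pod side: the labels (normally set in a controller's pod template)
apiVersion: v1
kind: Pod
metadata:
  name: web-pod-1
  labels:
    app: web               # matches the selector above
spec:
  containers:
  - name: web
    image: nginx:1.7.9
```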

2. Create an application

Let's start by deploying an application with a Deployment. Kubernetes provides several kinds of workloads, such as stateless Deployments, stateful StatefulSets, and DaemonSets for daemon processes; each workload corresponds to different application scenarios. We begin with Deployments as an example; the other workloads are similar. Deploying applications in kubernetes is generally done with yaml files, but for beginners writing yaml is lengthy and error-prone, so we first drive the API through the kubectl command line.

1. Deploy an nginx application with three replicas

[root@node-1 ~]# kubectl run nginx-app-demo --image=nginx:1.7.9 --port=80 --replicas=3 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-app-demo created

2. View the deployment list. The pod states are normal: READY shows ready replicas versus the desired count, and AVAILABLE shows how many replicas are available

[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   3/3     3            3           72s

3. View the details of the deployment. As shown below, the Deployment controls the replica count through a ReplicaSet, and the ReplicaSet in turn manages the pods

[root@node-1 ~]# kubectl describe deployments nginx-app-demo 
Name:                   nginx-app-demo     #application name
Namespace:              default            #Namespace
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo #Labels, important: the service will later select pods through these labels
Annotations:            deployment.kubernetes.io/revision: 1 #Rolling upgrade revision number
Selector:               run=nginx-app-demo #label selector
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable #Replica counts
StrategyType:           RollingUpdate     #Upgrade strategy is RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge #RollingUpdate strategy: at most 25% of pods unavailable, at most 25% surge during an upgrade
Pod Template:   #Pod template, including image, ports, storage, etc.
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:  #current state
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-7bdfd97dcd (3/3 replicas created) #ReplicaSets Controller Name
Events:  #Run Events
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m24s  deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 3

4. View the replicasets; the replicaset replica controller has generated three pods

1. View the replicasets list
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-7bdfd97dcd   3         3         3       9m9s

2. View the replicasets details
[root@node-1 ~]# kubectl describe replicasets nginx-app-demo-7bdfd97dcd 
Name:           nginx-app-demo-7bdfd97dcd
Namespace:      default
Selector:       pod-template-hash=7bdfd97dcd,run=nginx-app-demo
Labels:         pod-template-hash=7bdfd97dcd #labels: a pod-template-hash label is added to identify this replicaset
                run=nginx-app-demo
Annotations:    deployment.kubernetes.io/desired-replicas: 3 #Rolling upgrade info: desired replicas, maximum replicas, and revision
                deployment.kubernetes.io/max-replicas: 4
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-app-demo #Parent controller of this replicaset: the Deployment nginx-app-demo
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:  #Pod template, inherited from the deployment
  Labels:  pod-template-hash=7bdfd97dcd
           run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events: #Event log: three pods were created
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-hsrft
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-qtbzd
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-7t72x

5. View the pods, the carriers in which the application is actually deployed. Each pod runs an nginx container and is assigned an IP, through which the application can be accessed directly.

1. View the pod list; the names match those generated by the replicaset
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          13m

//View details of the pod
[root@node-1 ~]# kubectl describe pods nginx-app-demo-7bdfd97dcd-7t72x 
Name:               nginx-app-demo-7bdfd97dcd-7t72x
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-3/10.254.100.103
Start Time:         Sun, 11 Aug 2019 21:52:32 +0800
Labels:             pod-template-hash=7bdfd97dcd  #labels name
                    run=nginx-app-demo
Annotations:        <none>
Status:             Running
IP:                 10.244.2.4 #ip address of pod
Controlled By:      ReplicaSet/nginx-app-demo-7bdfd97dcd #Replica controller is replicasets
Containers:   #Container information, including container id, image, ports, state, environment variables, etc.
  nginx-app-demo:
    Container ID:   docker://5a0e5560583c5929e9768487cef43b045af4c6d3b7b927d9daf181cb28867766
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 11 Aug 2019 21:52:40 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-txhkc (ro)
Conditions: #Status conditions of the pod
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:    #Container Volume
  default-token-txhkc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-txhkc
    Optional:    false
QoS Class:       BestEffort #QOS Type
Node-Selectors:  <none> #Node selector
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:  #Event log: image pull, container start
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/nginx-app-demo-7bdfd97dcd-7t72x to node-3
  Normal  Pulling    14m   kubelet, node-3    Pulling image "nginx:1.7.9"
  Normal  Pulled     14m   kubelet, node-3    Successfully pulled image "nginx:1.7.9"
  Normal  Created    14m   kubelet, node-3    Created container nginx-app-demo
  Normal  Started    14m   kubelet, node-3    Started container nginx-app-demo

3. Accessing applications

kubernetes assigns each pod an ip address through which the application can be accessed directly, which is equivalent to accessing an RS directly. But an application is a whole composed of multiple replicas and relies on a service to achieve load balancing. This section explores the ClusterIP and NodePort access methods.

3.1 Access to Pod IP

1. Set up the pod content. For easy distinction, we set the nginx site content of the three pods to different values to see the load balancing effect

//View the pod list
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          28m

//Enter the pod container
[root@node-1 ~]# kubectl exec -it nginx-app-demo-7bdfd97dcd-7t72x /bin/bash

//Set up site content
root@nginx-app-demo-7bdfd97dcd-7t72x:/# echo "web1" >/usr/share/nginx/html/index.html

//By analogy, set the contents of the other two pods to web2 and web3
root@nginx-app-demo-7bdfd97dcd-hsrft:/# echo web2 >/usr/share/nginx/html/index.html
root@nginx-app-demo-7bdfd97dcd-qtbzd:/# echo web3 >/usr/share/nginx/html/index.html

2. Get the ip addresses of the pods. How can you get them quickly? The -o wide parameter displays extra columns, including the node each pod belongs to and its ip.

[root@node-1 ~]# kubectl get pods -o wide 
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          34m   10.244.2.4   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          34m   10.244.1.2   node-2   <none>           <none>
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          34m   10.244.1.3   node-2   <none>           <none>

3. Access each pod's ip to see its site content; each pod returns the content set in the previous step.

[root@node-1 ~]# curl http://10.244.2.4
web1
[root@node-1 ~]# curl http://10.244.1.2
web2
[root@node-1 ~]# curl http://10.244.1.3
web3

3.2 ClusterIP Access

Accessing an application directly by pod ip works for a single-pod application, but it does not meet the needs of an application with multiple replicas, which requires load balancing through a service. Services have different types, defaulting to ClusterIP, which means in-cluster access only. A service can be created for a deployment through the expose subcommand, as follows.

1. Expose the service, where port is the port the service listens on, target-port is the container's port, and type sets the service type

[root@node-1 ~]# kubectl expose deployment nginx-app-demo --name nginx-service-demo \
--port=80 \
--protocol=TCP \
--target-port=80 \
--type ClusterIP 
service/nginx-service-demo exposed

2. View the details of the service; the service automatically generates endpoints from the pods' ips via the labels selector

//View the service list. Two are shown; kubernetes is the default service created for the cluster itself
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP   29h
nginx-service-demo   ClusterIP   10.102.1.1   <none>        80/TCP    2m54s

//Looking at the service details, you can see that the Label selector matches the earlier Deployment settings, and Endpoints forms the list of pods
[root@node-1 ~]# kubectl describe services nginx-service-demo 
Name:              nginx-service-demo   #Name
Namespace:         default              #Namespace
Labels:            run=nginx-app-demo   #Label Name
Annotations:       <none>
Selector:          run=nginx-app-demo   #label selector
Type:              ClusterIP            #service type is ClusterIP
IP:                10.102.1.1           #The ip of the service, vip, is automatically assigned within the cluster
Port:              <unset>  80/TCP      #Service port, i.e. the port accessed via the ClusterIP
TargetPort:        80/TCP               #Container Port
Endpoints:         10.244.1.2:80,10.244.1.3:80,10.244.2.4:80 #Access Address List
Session Affinity:  None                 #Load Balance Scheduling Algorithm
Events:            <none>

3. Access the service's address and observe that it automatically load balances across the pods; the scheduling strategy is round-robin. Why? Because the service's default Session Affinity is None, i.e. round-robin scheduling. It can be set to ClientIP to maintain sessions, so that requests from the same client IP are always dispatched to the same pod.

[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
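
If sticky sessions are wanted instead of the round-robin behavior above, Session Affinity is a one-field change on the service spec; a minimal sketch (the timeout value shown is illustrative; 10800 seconds is the API default):

```yaml
spec:
  sessionAffinity: ClientIP       # default is None (round-robin behavior)
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800       # how long requests from one client IP stick to a pod
```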

4. A deeper look at the ClusterIP principle. There are two mechanisms that can implement the service backend: iptables and ipvs. This environment uses iptables, which generates access rules in the nat table: KUBE-SVC-R5Y5DZHD7Q6DDTFZ is the inbound DNAT forwarding rule, and KUBE-MARK-MASQ marks traffic in the outbound direction.

[root@node-1 ~]# iptables -t nat -L -n
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.102.1.1           /* default/nginx-service-demo: cluster IP */ tcp dpt:80
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0            10.102.1.1           /* default/nginx-service-demo: cluster IP */ tcp dpt:80

Outbound: KUBE-MARK-MASQ - when the source address is not in 10.244.0.0/16 and the destination is port 80 of 10.102.1.1, the request is sent to the KUBE-MARK-MASQ chain for masquerading
 Inbound: when any source address accesses port 80 of 10.102.1.1, the request is forwarded to the KUBE-SVC-R5Y5DZHD7Q6DDTFZ chain

5. View the inbound request rules. Inbound requests are mapped to different chains, and each chain forwards to a different pod ip.

1. View the inbound rule KUBE-SVC-R5Y5DZHD7Q6DDTFZ; requests are distributed across three chains
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (1 references)
target     prot opt source               destination         
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0            0.0.0.0/0 

2. View the rules of the three chains that actually forward the traffic; each maps to a different pod ip address
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX  -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.2           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.2:80

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K  -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.3           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.3:80

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI   -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.4           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.4:80     
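
The statistic-mode probabilities in the KUBE-SVC chain above look uneven (0.333, 0.5, then no condition), but because iptables evaluates the rules in order, they produce a uniform split across the three pods. A quick local check of the arithmetic, no cluster needed:

```shell
# iptables evaluates the chain's rules in order:
#   rule 1 matches with probability 0.33333
#   rule 2 is only reached if rule 1 missed, and then matches half the time
#   rule 3 is the unconditional fall-through
p1=$(awk 'BEGIN{ printf "%.3f", 0.33333 }')
p2=$(awk 'BEGIN{ printf "%.3f", (1-0.33333)*0.5 }')
p3=$(awk 'BEGIN{ printf "%.3f", (1-0.33333)*(1-0.5) }')
echo "$p1 $p2 $p3"    # each backend pod receives roughly 1/3 of new connections
```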

3.3 NodePort Access

Through ClusterIP, services can only be accessed from within the cluster; external clients cannot reach the application directly. For external access there are several options: NodePort, LoadBalancer, and Ingress. LoadBalancer must be implemented by a cloud Service provider, and Ingress requires installing a separate Ingress Controller, so for day-to-day testing NodePort is convenient: it exposes a port on every node for external access.

1. Modify the service's type from ClusterIP to NodePort (alternatively, recreate the service and specify type NodePort)

1. Modify the type through patch
[root@node-1 ~]# kubectl patch services nginx-service-demo -p '{"spec":{"type": "NodePort"}}'
service/nginx-service-demo patched

2. Confirm the yaml configuration: a NodePort has been assigned, i.e. every node will listen on this port
[root@node-1 ~]# kubectl get services nginx-service-demo -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-08-11T14:35:59Z"
  labels:
    run: nginx-app-demo
  name: nginx-service-demo
  namespace: default
  resourceVersion: "157676"
  selfLink: /api/v1/namespaces/default/services/nginx-service-demo
  uid: 55e29b78-bc45-11e9-b073-525400490421
spec:
  clusterIP: 10.102.1.1
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32416 #A NodePort port is automatically assigned
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app-demo
  sessionAffinity: None
  type: NodePort #Modify type to NodePort
status:
  loadBalancer: {}

3. View the service list; the service's type has been changed to NodePort, and the ClusterIP access address is retained
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP        30h
nginx-service-demo   NodePort    10.102.1.1   <none>        80:32416/TCP   68m
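
The assigned port 32416 falls inside the apiserver's default NodePort allocation range of 30000-32767 (adjustable with the kube-apiserver flag --service-node-port-range); a small local sanity check:

```shell
port=32416
# The default NodePort range is 30000-32767 unless changed on the kube-apiserver.
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "$port is in the default NodePort range"
fi
```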

2. Access the application through the NodePort. Each node's address acts like the vip, achieving the same load balancing effect, and the ClusterIP functionality remains available

1. NodePort Load balancing
[root@node-1 ~]# curl http://node-1:32416
web1
[root@node-1 ~]# curl http://node-2:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web3
[root@node-1 ~]# curl http://node-3:32416
web2

2. ClusterIP Load balancing
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web3

3. The NodePort forwarding principle: kube-proxy listens on the NodePort on every node, and traffic is forwarded on the back end by iptables rules

1. NodePort Listening Port
[root@node-1 ~]# netstat -antupl |grep 32416
tcp6       0      0 :::32416                :::*                    LISTEN      32052/kube-proxy 

2. View the nat table forwarding rules; there are two rules: KUBE-MARK-MASQ for the outbound direction and KUBE-SVC-R5Y5DZHD7Q6DDTFZ for the inbound direction.
Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-service-demo: */ tcp dpt:32416
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-service-demo: */ tcp dpt:32416

3. View inbound request rule chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ 
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ  -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (2 references)
target     prot opt source               destination         
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0            0.0.0.0/0          

4. Continue down the forwarding chains, which contain the DNAT forwarding rules and the KUBE-MARK-MASQ rules for the outbound return path
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX  -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.2           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.2:80

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K  -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.3           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.3:80

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI   -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.4           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.4:80

4. Scaling applications

When the application load is too high to satisfy requests, we usually expand the number of RS. In kubernetes, expanding RS is done by increasing the replica count, which makes elastic scaling very convenient and fast. kubernetes provides two scaling capabilities: 1. manual scaling, i.e. scale up and scale down; 2. dynamic elastic scaling through horizontalpodautoscalers, which scales automatically based on CPU utilization and depends on a monitoring component such as metrics-server. The latter is not set up in this environment and will be explored later; this article expands the application's replica count by manual scaling.
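
As a preview of the automatic path mentioned above (it depends on metrics-server, which is not installed in this environment), a horizontalpodautoscaler could be declared with the autoscaling/v1 API; the bounds and threshold below are illustrative:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-app-demo
spec:
  scaleTargetRef:                      # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-app-demo
  minReplicas: 3                       # illustrative bounds
  maxReplicas: 6
  targetCPUUtilizationPercentage: 80   # scale out above 80% average CPU
```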

1. Manually scale the number of replicas

[root@node-1 ~]# kubectl scale  --replicas=4 deployment nginx-app-demo 
deployment.extensions/nginx-app-demo scaled

2. View the Deployment and confirm the replica count has been scaled to 4

[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   4/4     4            4           133m

3. What happens to the Service at this point? Checking its details shows that the newly scaled pods are automatically added to the Service's endpoints, i.e. service discovery is automatic.

//View service details
[root@node-1 ~]# kubectl describe services nginx-service-demo 
Name:                     nginx-service-demo
Namespace:                default
Labels:                   run=nginx-app-demo
Annotations:              <none>
Selector:                 run=nginx-app-demo
Type:                     NodePort
IP:                       10.102.1.1
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32416/TCP
Endpoints:                10.244.1.2:80,10.244.1.3:80,10.244.2.4:80 + 1 more...#Address has been added automatically
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

//View endpoints details
[root@node-1 ~]# kubectl describe endpoints nginx-service-demo  
Name:         nginx-service-demo
Namespace:    default
Labels:       run=nginx-app-demo
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2019-08-11T16:04:56Z
Subsets:
  Addresses:          10.244.1.2,10.244.1.3,10.244.2.4,10.244.2.5
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  80    TCP

Events:  <none>

4. Test: set the new pod's site content to web4 using the same method as before, then curl the Service IP to observe the load balancing effect across all four pods

[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1

Thus, scaled-out pods are automatically added to the Service, giving service discovery and load balancing out of the box, and scaling an application is far faster than with traditional deployments.

5. Rolling Upgrade

In kubernetes, an application is updated by packaging the new version into an image and updating the Deployment's image. The default Deployment upgrade strategy is RollingUpdate, which replaces up to 25% of the pods at a time, creating new pods to take over from old ones batch by batch so the application never becomes unavailable during the upgrade. If the upgrade fails, the application can also be rolled back to its previous state with a rollout undo, which is implemented via ReplicaSets.
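The 25% figures are the Deployment's maxSurge and maxUnavailable settings. Kubernetes resolves these percentages to absolute pod counts by rounding maxSurge up and maxUnavailable down. A quick sketch of that arithmetic (an illustration of the documented rounding rules, not the controller's code):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """maxSurge rounds up and maxUnavailable rounds down, so at least
    one pod can always be replaced, even for small deployments."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# 4 replicas at the default 25%/25%: at most 5 pods exist at once,
# and at least 3 must stay available throughout the upgrade.
print(rolling_update_bounds(4))   # -> (1, 1)
print(rolling_update_bounds(10))  # -> (3, 2)
```

This is why the watch output below shows pods being swapped roughly one at a time for a 4-replica Deployment.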

1. Replace the nginx image to upgrade the application to the latest version; open another window and run kubectl get pods -w to watch the upgrade process

[root@node-1 ~]# kubectl set image deployments/nginx-app-demo nginx-app-demo=nginx:latest
deployment.extensions/nginx-app-demo image updated

2. Watching the upgrade shows the pods being replaced one by one: a new pod is created, then an old pod is deleted

[root@node-1 ~]# kubectl get pods -w
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          145m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          145m
nginx-app-demo-7bdfd97dcd-j6lgd   1/1     Running   0          12m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          145m
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending   0          0s #Create a new pod
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending   0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   1/1     Terminating   0          14m #Delete old pod, replace
nginx-app-demo-5cc8746f96-xsxz4   0/1     ContainerCreating   0          0s
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0          0s #New Second pod
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-s49nv   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m #Replace the second pod
nginx-app-demo-5cc8746f96-s49nv   1/1     Running             0          7s
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Terminating         0          146m
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-txjqh   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m
nginx-app-demo-5cc8746f96-xsxz4   1/1     Running             0          9s
nginx-app-demo-5cc8746f96-txjqh   1/1     Running             0          1s
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Terminating         0          146m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          146m
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Terminating         0          146m
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0          147m
nginx-app-demo-5cc8746f96-rcpmw   1/1     Running             0          2s
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          147m

3. Check the Deployment details again: it now points to a new ReplicaSet, while the original ReplicaSet remains as revision 1 and can be used for rollback.

[root@node-1 ~]# kubectl describe deployments nginx-app-demo 
Name:                   nginx-app-demo
Namespace:              default
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo
Annotations:            deployment.kubernetes.io/revision: 2 #New version number for rollback
Selector:               run=nginx-app-demo
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-5cc8746f96 (4/4 replicas created) #The new ReplicaSet that has replaced the old one
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  19m    deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 4
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 1
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 3
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 3
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 1
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 4
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 0

4. View the rollout history: the two revisions correspond to the two different ReplicaSets

[root@node-1 ~]# kubectl rollout history deployment nginx-app-demo 
deployment.extensions/nginx-app-demo 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

//View the ReplicaSet list; the old one now has 0 pods
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-5cc8746f96   4         4         4       9m2s
nginx-app-demo-7bdfd97dcd   0         0         0       155m
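As the ReplicaSet list suggests, a rollback does not rebuild pods from scratch: the Deployment simply scales the old ReplicaSet back up and the current one down to 0. A toy model of that bookkeeping (hypothetical function, just to illustrate the mechanism behind kubectl rollout undo):

```python
def rollback(replicasets, desired, target):
    """Scale the target (old) ReplicaSet up to the desired replica
    count and every other ReplicaSet down to 0."""
    return {name: (desired if name == target else 0) for name in replicasets}

state = {"nginx-app-demo-7bdfd97dcd": 0,   # revision 1 (old)
         "nginx-app-demo-5cc8746f96": 4}   # revision 2 (current)
print(rollback(state, 4, "nginx-app-demo-7bdfd97dcd"))
# -> {'nginx-app-demo-7bdfd97dcd': 4, 'nginx-app-demo-5cc8746f96': 0}
```

Because the old ReplicaSet keeps its pod template, rolling back is as fast as rolling forward.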

5. Test the upgraded application: nginx now reports the latest version, nginx/1.17.2

[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.17.2 #nginx version information
Date: Sun, 11 Aug 2019 16:30:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes

6. Roll back to the old version

[root@node-1 ~]# kubectl rollout undo deployment nginx-app-demo --to-revision=1
deployment.extensions/nginx-app-demo rolled back

//Test the application again: it has been rolled back to the old version
[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 11 Aug 2019 16:34:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes

Closing notes: this article has walked through the most important kubernetes concepts from the command line: application deployment, load balancing, elastic scaling and rolling upgrades. Readers can follow along to get started quickly. In real-world use, resources are mostly defined in yaml files when interacting with kubernetes, which later articles in this series will cover.

Reference Documents

Basic concepts: https://kubernetes.io/docs/tutorials/kubernetes-basics/

Deploy applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/

Explore applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/

External access: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/

Scale applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/

Rolling Upgrade: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/

When your talent can't sustain your ambition, you should settle down to study


Tags: Linux Nginx Kubernetes curl iptables

Posted on Thu, 16 Jan 2020 12:27:30 -0500 by Shaped