Kubernetes from Getting Started to Practice: Daemon and Task Controllers (01)

1. DaemonSet controller

1.1 Introduction to DaemonSet

Before introducing DaemonSet, consider a question: most of us have worked with monitoring systems such as Zabbix. A monitoring system needs an agent installed on every monitored machine, which usually raises the following issues:

  • Every node must have the agent installed to collect monitoring data
  • Newly joined nodes need the agent configured, either manually or by script
  • Nodes must be manually removed from the monitoring system after they go offline

Kubernetes also frequently needs to deploy an application onto every node; how does it solve these problems? The answer is the DaemonSet. A DaemonSet (DS for short) is suited to running a daemon process on all or some of the nodes, such as the network plug-in kube-flannel and the kube-proxy deployed during cluster installation. A DaemonSet has the following characteristics:

  • A DaemonSet ensures that every node runs one Pod replica
  • A label selector or node affinity can restrict the Pod replica to specific nodes
  • When a new node joins the cluster, a Pod is automatically added to it
  • When a node is removed, the garbage collection mechanism automatically cleans up its Pod

A DaemonSet is suitable for scenarios where every node needs a daemon deployed, such as:

  • Log collection agents, such as fluentd or logstash
  • Monitoring collection agents, such as Prometheus Node Exporter, Sysdig Agent, or Ganglia gmond
  • Distributed cluster components, such as Ceph MON, Ceph OSD, glusterd, or Hadoop YARN NodeManager
  • Components that Kubernetes itself requires, such as the network plug-ins flannel, weave, and calico, and kube-proxy

When Kubernetes is installed, two DaemonSets, kube-flannel-ds-amd64 and kube-proxy, are created by default in the kube-system namespace. They are responsible for the flannel overlay network and the Service proxy implementation, respectively, and can be viewed with the following commands:

  1. View the list of DaemonSets in the kube-system namespace; the current cluster has three nodes, so each DS runs three Pod replicas
[root@node-1 ~]# kubectl get ds -n kube-system 
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-flannel-ds-amd64   3         3         3       3            3           beta.kubernetes.io/arch=amd64   46d
kube-proxy              3         3         3       3            3           beta.kubernetes.io/os=linux     46d
  2. View the Pod replicas; each DaemonSet runs one Pod per node, as the command below shows
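For example, the kube-proxy Pods can be listed together with the node each one runs on (a minimal check; the output varies by cluster and is omitted here):

kubectl get pods -n kube-system -o wide | grep kube-proxy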

1.2 DaemonSet Definition

The DaemonSet definition is similar to the Deployment definition: it specifies the apiVersion, kind, metadata, and spec attributes, but there is no need to define the number of replicas in the spec. spec.template defines the template from which the DS generates containers. The following example runs a daemon based on the fluentd-elasticsearch image on every node, collecting logs with fluentd and reporting them to Elasticsearch.

  1. Define the DaemonSet in a yaml file
[root@node-1 happycloudlab]# cat fluentd-es-daemonset.yaml 
apiVersion: apps/v1              #api version information
kind: DaemonSet                  #Type is DaemonSet
metadata:                        #Metadata Information
  name: fluentd-elasticsearch
  namespace: kube-system        #Running Namespace
  labels:
    k8s-app: fluentd-logging
spec:                          #DaemonSet spec
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:            #Container information
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:          #Resource requests and limits
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:      #Volume mounts; the agent collects logs from these directories
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:            #Host directories mounted into the Pod as hostPath volumes
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

DaemonSet Definition Notes:

  • daemonset.spec.template defines the Pod template; its metadata labels must be consistent with the selector
  • The template's restartPolicy must be Always (the default), so that Pods automatically restart and recover when the service fails
  • Pods can be restricted to particular nodes through scheduling strategies such as nodeSelector and node affinity for flexible scheduling, as in the fragment below
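For example, a nodeSelector can be added under the Pod template spec to pin the daemon to labeled nodes (a minimal sketch; the disktype=ssd label is only an assumed example):

      # fragment of spec.template.spec in a DaemonSet manifest
      nodeSelector:
        disktype: ssd    # run only on nodes labeled disktype=ssd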
  2. Create the DaemonSet
[root@node-1 happycloudlab]# kubectl apply -f fluentd-es-daemonset.yaml 
daemonset.apps/fluentd-elasticsearch created
  3. View the DaemonSet list
[root@node-1 happycloudlab]# kubectl get daemonsets -n kube-system  fluentd-elasticsearch 
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   3         3         3       3            3           <none>          16s
  4. View the running Pods; the NODE column shows that each node is running one Pod
[root@node-1 happycloudlab]# kubectl get pods -n kube-system -o wide |grep fluentd 
fluentd-elasticsearch-blpqb      1/1     Running   0          3m7s   10.244.2.79      node-3   <none>           <none>
fluentd-elasticsearch-ksdlt      1/1     Running   0          3m7s   10.244.0.11      node-1   <none>           <none>
fluentd-elasticsearch-shtkh      1/1     Running   0          3m7s   10.244.1.64      node-2   <none>           <none>
  5. View the DaemonSet details; note that the DaemonSet supports the RollingUpdate update strategy
[root@node-1 happycloudlab]# kubectl get daemonsets -n kube-system fluentd-elasticsearch -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"fluentd-logging"},"name":"fluentd-elasticsearch","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"name":"fluentd-elasticsearch"}},"template":{"metadata":{"labels":{"name":"fluentd-elasticsearch"}},"spec":{"containers":[{"image":"quay.io/fluentd_elasticsearch/fluentd:v2.5.2","name":"fluentd-elasticsearch","resources":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"200Mi"}},"volumeMounts":[{"mountPath":"/var/log","name":"varlog"},{"mountPath":"/var/lib/docker/containers","name":"varlibdockercontainers","readOnly":true}]}],"terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}],"volumes":[{"hostPath":{"path":"/var/log"},"name":"varlog"},{"hostPath":{"path":"/var/lib/docker/containers"},"name":"varlibdockercontainers"}]}}}}
  creationTimestamp: "2019-10-30T15:19:20Z"
  generation: 1
  labels:
    k8s-app: fluentd-logging
  name: fluentd-elasticsearch
  namespace: kube-system
  resourceVersion: "6046222"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/fluentd-elasticsearch
  uid: c2c02c48-9f93-48f3-9d6c-32bfa671db0e
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        imagePullPolicy: IfNotPresent
        name: fluentd-elasticsearch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always             #The restart policy must be Always to ensure automatic recovery in case of an exception
      schedulerName: default-scheduler  #Default Scheduling Policy
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
  templateGeneration: 1
  updateStrategy:  #Rolling Update Policy
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
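The update strategy can be tuned in the manifest or patched in place; for example, raising maxUnavailable lets more Pods be replaced in parallel during a rollout (a hedged sketch using kubectl patch; the value 2 is only illustrative):

kubectl patch daemonset fluentd-elasticsearch -n kube-system \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"maxUnavailable":2}}}}'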

1.3 Rolling Update and Rollback

  1. Update the image to the latest version
[root@node-1 ~]# kubectl set image daemonsets fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:latest -n kube-system
daemonset.extensions/fluentd-elasticsearch image updated
  2. View the rolling update status
[root@node-1 ~]# kubectl rollout status daemonset -n kube-system fluentd-elasticsearch 
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 of 3 updated pods are available...
daemon set "fluentd-elasticsearch" successfully rolled out
  3. View the DaemonSet details to observe the rolling update process: the DaemonSet deletes the Pod on a node before creating its replacement, as the command below can show
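The update events can be inspected with describe, for example (output omitted here; the Events section records the delete-then-create sequence on each node):

kubectl describe daemonsets -n kube-system fluentd-elasticsearch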

  4. View the DaemonSet rollout history; REVISION 1 is the initial version
[root@node-1 ~]# kubectl rollout history daemonset -n kube-system fluentd-elasticsearch 
daemonset.extensions/fluentd-elasticsearch 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
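The CHANGE-CAUSE column shows <none> because the update commands were not run with the --record flag; adding it records the command that caused each revision (a hedged example; --record was available in this kubectl version, though later deprecated):

kubectl set image daemonsets fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:latest -n kube-system --record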
  5. Roll back the update; if the new configuration does not meet expectations, you can roll back to the original version
[root@node-1 ~]# kubectl rollout undo daemonset -n kube-system fluentd-elasticsearch --to-revision=1
daemonset.extensions/fluentd-elasticsearch rolled back
  6. Confirm the version rollback, using for example the check below
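One way to confirm it is to read the image back out of the DaemonSet (a minimal check; after the rollback it should print quay.io/fluentd_elasticsearch/fluentd:v2.5.2):

kubectl get daemonsets -n kube-system fluentd-elasticsearch -o jsonpath='{.spec.template.spec.containers[0].image}'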

  7. Watch the version rollback process; like a rolling update, it deletes each Pod before creating the new one, as the watch below can show
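The Pod churn can be followed live with a watch, for example (output omitted; Terminating and ContainerCreating states alternate node by node):

kubectl get pods -n kube-system -w | grep fluentd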

  8. Delete the DaemonSet
[root@node-1 ~]# kubectl delete daemonsets -n kube-system fluentd-elasticsearch 
daemonset.extensions "fluentd-elasticsearch" deleted
[root@node-1 ~]# kubectl get pods -n kube-system |grep fluentd
fluentd-elasticsearch-d6f6f      0/1     Terminating   0          110m

1.4 DaemonSet Scheduling

A DaemonSet runs a Pod replica on every node through the default Kubernetes scheduler. To run it on only some of the nodes, there are three methods:

  • Specify the node to run on with nodeName
  • Select nodes by label with nodeSelector
  • Schedule with affinity, using node affinity and node anti-affinity

These scheduling mechanisms make it possible to run a Pod on specific nodes. The following example uses node affinity to dispatch the Pod to a subset of the nodes, in this case node-2.

  1. Add the app=web label to a node, as below, then list the node labels to confirm
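The label can be applied with kubectl label (the command itself was not captured in the original listing, so this is the assumed form), after which the node labels can be verified:

kubectl label nodes node-2 app=web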
[root@node-1 happycloudlab]# kubectl get nodes --show-labels 
NAME     STATUS   ROLES    AGE   VERSION   LABELS
node-1   Ready    master   47d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node-2   Ready    <none>   47d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux
node-3   Ready    <none>   47d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux
  2. Add node affinity scheduling rules: requiredDuringSchedulingIgnoredDuringExecution sets hard requirements that must be met, while preferredDuringSchedulingIgnoredDuringExecution sets soft preferences used to optimize placement
[root@node-1 happycloudlab]# cat fluentd-es-daemonset.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:  #Soft preference: satisfied if possible
          - weight: 1
            preference:
              matchExpressions:
              - key: app 
                operator: In
                values:
                - web 
          requiredDuringSchedulingIgnoredDuringExecution:  #Hard requirement: must be satisfied
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node-2
                - node-3
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
  3. Recreate the DS and view the list
[root@node-1 happycloudlab]# kubectl delete ds -n kube-system fluentd-elasticsearch 
daemonset.extensions "fluentd-elasticsearch" deleted
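The updated manifest is then re-applied (the apply step was omitted from the original capture; this is the assumed command):

kubectl apply -f fluentd-es-daemonset.yaml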

[root@node-1 happylau]# kubectl get daemonsets -n kube-system fluentd-elasticsearch 
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   1         1         1       1            1           <none>          112s
  4. Verify that the Pod is running; the DaemonSet's Pod has been scheduled to node-2
[root@node-1 happycloudlab]# kubectl get pods -n kube-system -o wide 
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-9kngs      1/1     Running   0          2m39s   10.244.1.82      node-2   <none>           <none>

Closing Remarks

This article introduced the DaemonSet controller in Kubernetes. The DS controller ensures that every node runs a specific daemon process, and its Pods can additionally be scheduled to particular nodes through nodeSelector or node affinity.

Reference Documents

DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
