Kubernetes Core in Practice --- StatefulSets

7. StatefulSets

StatefulSet is the workload API object used to manage stateful applications.

A StatefulSet manages the deployment and scaling of a set of Pods, and provides ordering and uniqueness guarantees for those Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. The Pods are created from the same spec, but are not interchangeable: each one keeps a persistent identifier no matter how it is scheduled or rescheduled.

StatefulSets follow the same controller pattern as other workload resources: you define the desired state in a StatefulSet object, and the StatefulSet controller makes whatever updates are needed to reach that state.

Using StatefulSets

StatefulSets are valuable for applications that require one or more of the following:

- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.

In the above, "stable" means persistence across Pod scheduling and rescheduling. If an application does not require stable identifiers or ordered deployment, deletion, or scaling, you should deploy it using a workload controller that manages a set of stateless replicas, such as Deployment or ReplicaSet, which is likely a better fit for stateless workloads.
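The stable network identity can be made concrete: each Pod in a StatefulSet gets a predictable DNS name built from the StatefulSet name, its ordinal, and the governing headless Service. A minimal sketch, assuming a StatefulSet `web` behind a headless Service `nginx` in the `default` namespace (illustrative names):

```shell
# Sketch: compose the stable per-Pod DNS names a StatefulSet provides.
# The names below (web, nginx, default) are assumptions for illustration.
statefulset="web"
service="nginx"     # governing headless Service
namespace="default"
for ordinal in 0 1 2; do
  echo "${statefulset}-${ordinal}.${service}.${namespace}.svc.cluster.local"
done
# prints web-0.nginx.default.svc.cluster.local, web-1..., web-2...
```

These names stay the same across Pod rescheduling, which is exactly what clustered applications rely on to find their peers.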

Limitations

- The storage for a given Pod must either be provisioned by a PersistentVolume provisioner based on the requested storage class, or pre-provisioned by an administrator.
- Deleting or scaling down a StatefulSet will not delete its associated storage volumes. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
- StatefulSets currently require a headless Service to be responsible for the network identity of their Pods. You are responsible for creating this Service.
- StatefulSets provide no guarantee that Pods will be terminated when the StatefulSet is deleted. To achieve an ordered and graceful termination of the Pods, scale the StatefulSet down to 0 before deleting it.
- Rolling updates with the default Pod management policy (OrderedReady) can enter a broken state that requires manual intervention to repair.

Example:
[root@k8s-master-node1 ~/yaml/test]# vim statefulsets.yaml
[root@k8s-master-node1 ~/yaml/test]# cat statefulsets.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  # One PVC per Pod is created from this template (www-web-0, www-web-1, ...),
  # so each replica gets its own persistent storage. Listing the same volume
  # name multiple times under .spec.template.spec.volumes would be invalid.
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 200Mi

[root@k8s-master-node1 ~/yaml/test]#
Create the StatefulSet
[root@k8s-master-node1 ~/yaml/test]# kubectl  apply -f statefulsets.yaml 
service/nginx created
statefulset.apps/web created
[root@k8s-master-node1 ~/yaml/test]#
View the Pods
[root@k8s-master-node1 ~/yaml/test]# kubectl  get pod
NAME                                     READY   STATUS    RESTARTS   AGE
ingress-demo-app-694bf5d965-8rh7f        1/1     Running   0          67m
ingress-demo-app-694bf5d965-swkpb        1/1     Running   0          67m
nfs-client-provisioner-dc5789f74-5bznq   1/1     Running   0          52m
web-0                                    1/1     Running   0          93s
web-1                                    1/1     Running   0          85s
web-2                                    1/1     Running   0          66s
[root@k8s-master-node1 ~/yaml/test]#
View the StatefulSet
[root@k8s-master-node1 ~/yaml/test]# kubectl  get statefulsets.apps -o wide
NAME   READY   AGE    CONTAINERS   IMAGES
web    3/3     113s   nginx        nginx
[root@k8s-master-node1 ~/yaml/test]#
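When a StatefulSet declares its storage through `volumeClaimTemplates`, the controller creates one PersistentVolumeClaim per Pod, named `<template-name>-<statefulset-name>-<ordinal>`. A quick sketch of that naming, using the `www` and `web` names from the example above:

```shell
# Sketch: derive the PVC name the StatefulSet controller creates per Pod.
template="www"      # volumeClaimTemplates metadata.name
statefulset="web"
for ordinal in 0 1 2; do
  echo "${template}-${statefulset}-${ordinal}"
done
# prints www-web-0, www-web-1, www-web-2
```

Because these claims survive Pod deletion and rescheduling, `web-1` will always reattach to the same volume, which is the "stable, persistent storage" guarantee in practice.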

Note: this example presupposes that dynamic PV provisioning is already set up in the cluster (here via the nfs-client-provisioner visible in the Pod listing above). Reference: https://cloud.tencent.com/dev...
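The dynamic-provisioning prerequisite usually boils down to a default StorageClass backed by a provisioner. A minimal sketch, assuming the NFS subdir external provisioner (the class name and provisioner string here are assumptions; they must match whatever you actually deployed):

```yaml
# Sketch only: a default StorageClass for dynamic NFS provisioning.
# Name and provisioner value are assumptions; align them with your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```

With a default StorageClass in place, the `volumeClaimTemplates` in the manifest above need not name a class explicitly; the cluster provisions a volume for each claim automatically.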


Tags: Docker Kubernetes Container Virtualization

Posted on Tue, 30 Nov 2021 00:22:38 -0500 by dreams4000