StatefulSet (stateful service) implementation in Kubernetes

StatefulSet introduction
  Problems encountered
Implementation principle
1, Example
1, Create a StorageClass resource object
  1. Create the NFS service
  2. Create RBAC permissions
  3. Create a Deployment resource object that runs the NFS client provisioner as a Pod
  4. Create the StorageClass yaml file
2, Solve automatic PVC creation
  1. Create the StatefulSet yaml file
  2. Verify the data store
3, Small experiment
  (1) Create a StorageClass resource object
  (2) Solve automatic PVC creation
StatefulSet introduction

Problems encountered:

Pods created by a Deployment are stateless. If a Pod that has a Volume mounted dies, the ReplicationController starts another one to keep the service available. But because the Pod is stateless, the association between the Pod and its Volume is broken when the Pod dies, and the new Pod cannot find the old Pod's Volume. From the user's point of view, the underlying Pod failure is invisible, yet after it happens the previously mounted disk can no longer be used.

StatefulSet: a controller that gives each Pod a unique, stable identity and guarantees the order of deployment and scaling.

Pod consistency: includes ordering (start and stop order) and network identity. This consistency belongs to the Pod itself, regardless of which node it is scheduled to.

Stable order: for a StatefulSet with N replicas, each Pod is assigned a unique ordinal in the range [0, N).

Stable network identity: the hostname of each Pod is <StatefulSet name>-<ordinal>.

Stable storage: a PV is created for each Pod through volumeClaimTemplates. When a Pod is deleted or the replica count is reduced, the associated volumes are not deleted.
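
For example, a hypothetical StatefulSet named web with replicas: 3 behind a headless Service named nginx (placeholder names; the files later in this article use statefulset-test and headless-svc) would yield:

    Pod names / hostnames: web-0, web-1, web-2
    DNS names:             web-0.nginx.default.svc.cluster.local, and so on,
                           following <pod>.<service>.<namespace>.svc.cluster.local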

(1) RC, RS, Deployment, DS -----> stateless services

Template: the Pods created from the template all have identical state (apart from name, IP and domain name).

In other words, any Pod can be deleted and replaced with a newly generated one.

(2) Stateful service: it needs to record events from one or more previous sessions and use them as the basis for subsequent communication, for example MySQL and other database services. (The name of the Pod cannot change at will, and the data persistence directories differ: each Pod has its own unique data persistence directory.)

MySQL: master-slave relationship.

If stateless services are compared to cattle, sheep and other livestock, which are "sent off" when the time comes, then stateful services are like pets: unlike livestock, pets are not sent away at a certain age, and people tend to care for them for a lifetime.

(3) Each Pod ---> corresponds to one PVC ---> each PVC corresponds to one PV.

StorageClass: automatically creates PVs.

Still to be solved: automatically creating PVCs.
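
A minimal sketch of how the two halves meet (placeholder names test and stateful-nfs; the complete files appear below). Modern manifests can use the storageClassName field instead of the volume.beta.kubernetes.io/storage-class annotation used later in this article:

    volumeClaimTemplates:                # one PVC per replica, named <template name>-<pod name>
    - metadata:
        name: test
      spec:
        storageClassName: stateful-nfs   # the StorageClass that auto-creates the PV
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi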

Implementation principle

Like ReplicaSet and Deployment resources, StatefulSet is implemented as a controller. It is managed mainly by StatefulSetController, StatefulSetControl and StatefulPodControl. StatefulSetController receives add, update and delete events from the PodInformer and StatefulSetInformer and pushes them onto a work queue:

The StatefulSetController starts multiple goroutines in its Run method; these workers take the StatefulSet resources to be processed off the queue and synchronize them. The following walks through how Kubernetes synchronizes a StatefulSet.

1, Example

(1) Create the StatefulSet yaml file

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
  selector:
    app: headless-pod
  clusterIP: None        # headless: no cluster IP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80

Deployment: Deployment + RS + random string (the Pod name). Its Pods have no ordering and can be replaced at will; a StatefulSet's Pods, by contrast, are ordered and not interchangeable.

1. headless-svc: headless service. Because it has no cluster IP address, it has no load-balancing function. StatefulSet requires Pod names to be ordered, and no Pod may be replaced arbitrarily: even after a Pod is rebuilt, its name stays the same. The headless service gives each backend Pod a resolvable name.

2. StatefulSet: defines the specific application.

3. volumeClaimTemplates: automatically creates a PVC to provide dedicated storage for each backend Pod.

Execute it.

[root@master yaml]# kubectl apply -f statefulset.yaml

Check it out.

[root@master yaml]# kubectl get svc

[root@master yaml]# kubectl get pod    // you can see that these Pods are created in order
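
To confirm the stable network identity, each Pod can be resolved through the headless service. A quick check (a sketch: it assumes cluster DNS such as CoreDNS is running and the busybox:1.28 image can be pulled):

[root@master yaml]# kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup statefulset-test-0.headless-svc

The name resolves as <pod>.<service>.<namespace>.svc.cluster.local directly to the Pod's IP.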

1, Create a StorageClass resource object.

1. Create the NFS service.

Install the required packages for NFS

[root@node02 ~]# yum -y install nfs-utils rpcbind

Create shared directory

[root@master ~]# mkdir /nfsdata

Configure permissions for the shared directory

[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)

Start nfs and rpcbind

[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind

Test it.

[root@master ~]# showmount -e
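
The export can also be verified from any other machine with nfs-utils installed (192.168.1.21 is the master's address used throughout this article); the output should list /nfsdata:

[root@node02 ~]# showmount -e 192.168.1.21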

2. Create RBAC permissions.

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default        # namespace of the ServiceAccount
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute it.

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
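
Optionally, confirm the objects exist with standard kubectl queries:

[root@master yaml]# kubectl get serviceaccount nfs-provisioner
[root@master yaml]# kubectl get clusterrole nfs-provisioner-runner
[root@master yaml]# kubectl get clusterrolebinding run-nfs-provisioner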

3. Create a Deployment resource object that runs the NFS client provisioner as a Pod in front of the real NFS service.

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata

Execute it.

[root@master yaml]# kubectl apply -f nfs-deployment.yaml

Check it out.

[root@master yaml]# kubectl get pod

4. Create the StorageClass yaml file

[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: bdqn          # associates with the Deployment above via its PROVISIONER_NAME
reclaimPolicy: Retain

Execute it.

[root@master yaml]# kubectl apply -f test-storageclass.yaml

Check it out.

[root@master yaml]# kubectl get sc

2, Solve automatic PVC creation

1. Create the StatefulSet yaml file

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:        # automatically create a PVC, giving each backend Pod dedicated storage
  - metadata:
      name: test
      annotations:             # specify the StorageClass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

In this example:

  • A new Service object named headless-svc is created, indicated by the metadata: name field. It carries the label app: headless-svc and selects the Pods labeled app: headless-pod via selector: app: headless-pod. The Service exposes port 80 and names it myweb. Because clusterIP is None, the Service controls the DNS domain and routes traffic directly to the Pods deployed by the StatefulSet.
  • A StatefulSet named statefulset-test is created with three Pod replicas (replicas: 3).
  • The Pod template (spec: template) labels its Pods app: headless-pod.
  • The Pod specification (template: spec) instructs the StatefulSet's Pods to run one container, myhttpd, using the httpd image.
  • The container opens containerPort 80, the port the Service's myweb port forwards to.
  • template: spec: volumeMounts mounts a volume named test at mountPath /mnt, the path inside the container where the storage volume appears.
  • Through volumeClaimTemplates, the StatefulSet automatically provisions a PersistentVolumeClaim (test) for each Pod.

Execute it.

[root@master yaml]# kubectl apply -f statefulset.yaml

Check it out.

[root@master yaml]# kubectl get pod
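
To see the ordered startup described earlier, the creation can be watched from a second terminal (standard kubectl; the Pods should appear one at a time as statefulset-test-0, then -1, then -2):

[root@master yaml]# kubectl get pod -l app=headless-pod -w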

If there is a problem with the first Pod, the subsequent Pods will not be created.

[root@master yaml]# kubectl get statefulsets
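
The PVCs created by volumeClaimTemplates are also worth inspecting; their names follow the pattern <template name>-<pod name>, so here they should be test-statefulset-test-0 through test-statefulset-test-2, each Bound to a dynamically provisioned PV:

[root@master yaml]# kubectl get pvc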

2. Verify the data store

Create a file in the container
[root@master yaml]# kubectl exec -it statefulset-test-0 /bin/sh
# cd /mnt
# touch testfile
# exit
Check on the host
[root@master yaml]# ls /nfsdata/default-test-statefulset-test-0-pvc-bf1ae1d0-f496-4d69-b33b-39e8aa0a6e8d/
testfile
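
To confirm the "stable storage" property from the introduction, delete the Pod and check that its replacement, which keeps the same name, still sees the file (a sketch; the PVC and PV survive the Pod):

[root@master yaml]# kubectl delete pod statefulset-test-0
[root@master yaml]# kubectl get pod statefulset-test-0      # wait until it is Running again
[root@master yaml]# kubectl exec -it statefulset-test-0 -- ls /mnt

The last command should still list testfile.
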
3, Small experiment

Create a namespace named after yourself and run all of the following resources in it. Run an httpd web service with a StatefulSet resource: three Pods are required, but each Pod's main page content must be different, and dedicated data persistence is required. Try deleting one of the Pods, view the newly generated Pod, and summarize how it differs from Pods controlled by a Deployment resource controller.

(1) Create a StorageClass resource object.

Note: the NFS service must be running.

1. Create a yaml file for the namespace

[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-lll        # namespace name
Execute it.
[root@master yaml]# kubectl apply -f namespace.yaml
Check it out.
[root@master yaml]# kubectl get namespaces

2. Create RBAC permissions.

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-lll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: xgp-lll
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Execute it.
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml

3. Create a Deployment resource object that runs the NFS client provisioner as a Pod in front of the real NFS service.

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-lll
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: xgp
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata
Execute it.
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
Check it out.
[root@master yaml]# kubectl get pod -n xgp-lll

4. Create the StorageClass yaml file

[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: xgp           # associates with the Deployment above via its PROVISIONER_NAME
reclaimPolicy: Retain
Execute it.
[root@master yaml]# kubectl apply -f test-storageclass.yaml
Check it out.
[root@master yaml]# kubectl get sc -n xgp-lll

(2) Solve automatic PVC creation

1. Create the StatefulSet yaml file

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: xgp-lll
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: xgp-lll
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:        # automatically create a PVC, giving each backend Pod dedicated storage
  - metadata:
      name: test
      annotations:             # specify the StorageClass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Execute it.

[root@master yaml]# kubectl apply -f statefulset.yaml

Check it out.

[root@master yaml]# kubectl get pod -n xgp-lll
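
At this point the deletion test from the experiment can already be run: delete one Pod and watch its replacement come back under the same name, unlike a Deployment's Pods, which get a new random suffix (a sketch using the second Pod):

[root@master yaml]# kubectl delete pod statefulset-test-1 -n xgp-lll
[root@master yaml]# kubectl get pod -n xgp-lll      # statefulset-test-1 reappears with the same name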

2. Verify the data store

Create files in the containers
First:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-0 /bin/bash
root@statefulset-test-0:/usr/local/apache2# echo 123 > /usr/local/apache2/htdocs/index.html
Second:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-1 /bin/bash
root@statefulset-test-1:/usr/local/apache2# echo 456 > /usr/local/apache2/htdocs/index.html
Third:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-2 /bin/bash
root@statefulset-test-2:/usr/local/apache2# echo 789 > /usr/local/apache2/htdocs/index.html
Check on the host
First:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-0-pvc-ccaa02df-4721-4453-a6ec-4f2c928221d7/index.html
123
Second:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-1-pvc-88e60a58-97ea-4986-91d5-a3a6e907deac/index.html
456
Third:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-2-pvc-4eb2bbe2-63d2-431a-ba3e-b7b8d7e068d3/index.html
789
Visit
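
Each Pod can be visited individually through the headless service (a sketch; it assumes cluster DNS is working and the busybox:1.28 image can be pulled):

[root@master yaml]# kubectl run -it --rm web-test -n xgp-lll --image=busybox:1.28 -- sh
/ # wget -qO- statefulset-test-0.headless-svc.xgp-lll.svc.cluster.local
123
/ # wget -qO- statefulset-test-1.headless-svc.xgp-lll.svc.cluster.local
456
/ # wget -qO- statefulset-test-2.headless-svc.xgp-lll.svc.cluster.local
789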
