024. Mastering Pods: Deploying MongoDB

1. Pre-preparation
2. Create a StatefulSet
3. Confirmation and Verification
4. Common Cluster Management

1. Pre-preparation

1.1 Preconditions

  • Cluster deployment: for Kubernetes cluster deployment, refer to articles 003-019.
  • GlusterFS on Kubernetes: refer to 010. Kubernetes Persistent Storage - GlusterFS Hyper-converged Deployment.

1.2 Deployment Planning

This experiment uses a StatefulSet to deploy a MongoDB cluster, with each MongoDB instance using glusterfs for persistent storage. The result is a MongoDB cluster with no single point of failure that is highly available and can be scaled dynamically. The deployment architecture is as follows (diagram omitted).

2. Create a StatefulSet

2.1 Create a StorageClass

[root@k8smaster01 ~]# vi heketi-secret.yaml          # Create a Secret to hold the Heketi admin password
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: heketi
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
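
The key field holds the base64 encoding of the Heketi admin password; decoding it shows that YWRtaW4xMjM= corresponds to admin123:

[root@k8smaster01 ~]# echo -n "YWRtaW4xMjM=" | base64 -d
admin123
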
[root@k8smaster01 heketi]# kubectl create -f heketi-secret.yaml          # Create the heketi secret
[root@k8smaster01 heketi]# kubectl get secrets -n heketi
NAME                                 TYPE                                  DATA   AGE
default-token-6n746                  kubernetes.io/service-account-token   3      144m
heketi-config-secret                 Opaque                                3      142m
heketi-secret                        kubernetes.io/glusterfs               1      3m1s
heketi-service-account-token-ljlkb   kubernetes.io/service-account-token   3      143m
[root@k8smaster01 ~]# mkdir mongo
[root@k8smaster01 ~]# cd mongo
[root@k8smaster01 heketi]# vi storageclass-fast.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
parameters:
  resturl: "http://10.254.82.26:8080"
  clusterid: "d96022e907f82045dcc426a752adc47c"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "heketi"          # namespace in which heketi-secret was created above
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
[root@k8smaster01 heketi]# kubectl create -f storageclass-fast.yaml
[root@k8smaster01 heketi]# kubectl get storageclasses/fast
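
Before moving on, dynamic provisioning through the fast StorageClass can optionally be verified with a throwaway claim; the PVC name gluster-test below is purely illustrative:

[root@k8smaster01 heketi]# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test
spec:
  storageClassName: fast
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
EOF
[root@k8smaster01 heketi]# kubectl get pvc gluster-test          # should report STATUS Bound after a short delay
[root@k8smaster01 heketi]# kubectl delete pvc gluster-test       # clean up the test claim
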

2.2 Authorize ServiceAccount

Step 2.4 of this experiment uses a mongo-sidecar Pod to configure and manage the mongo Pods. The default service account can only read the attributes of the Pod it belongs to and cannot observe Pods in other namespaces. If a Pod needs to manage other Pods or other resource objects, the service account of its own namespace is not enough: you must either create a service account manually and reference it when creating the Pod, or grant the required permissions directly to the default service account.
[root@uk8s-m-01 mongo]# vi defaultaccout.yaml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: DDefault-Cluster-Admin
subjects:
  - kind: ServiceAccount
    # Reference to upper's `metadata.name`
    name: default
    # Reference to upper's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

[root@uk8s-m-01 mongo]# kubectl apply -f defaultaccout.yaml
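
Granting cluster-admin to the default service account is the quickest route but is very broad. As mentioned above, a dedicated service account could be created instead and referenced from the Pod template; the sketch below is only illustrative, and the names mongo-sa and mongo-sa-admin are not part of the original deployment:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-sa              # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: mongo-sa-admin        # illustrative name
subjects:
  - kind: ServiceAccount
    name: mongo-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

The StatefulSet's Pod template would then set serviceAccountName: mongo-sa instead of relying on the default account.
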

2.3 Create headless Service

[root@k8smaster01 mongo]# vi mongo-headless-service.yaml
(Omitted.) Tip: In this experiment the headless Service is defined directly in the same YAML file as the StatefulSet; see 2.4.

2.4 Create a StatefulSet

[root@k8smaster01 mongo]# vi statefulset-mongo.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---                                  # The above is the headless Service
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.4           # Newer versions may not support the smallfiles parameter, so version 3.4 is specified
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"         # Use smaller default data files
            - "--noprealloc"         # Disable data file pre-allocation
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "fast"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
Interpretation:
  1. The StatefulSet defines two containers: mongo and mongo-sidecar. mongo is the main service program, and mongo-sidecar is a tool that joins the individual mongo instances into a cluster. The following environment variables are set on mongo-sidecar:
    • MONGO_SIDECAR_POD_LABELS: set to the labels of the mongo Pods, which the sidecar uses to find the MongoDB cluster instances it manages (see the example after this list).
    • KUBERNETES_MONGO_SERVICE_NAME: its value is mongo, indicating that the sidecar uses the Service named mongo to complete the setup of the MongoDB cluster.
  2. replicas=3 indicates that the MongoDB cluster consists of three mongo instances.
  3. volumeClaimTemplates is the most important storage setting of the StatefulSet. Setting volume.beta.kubernetes.io/storage-class: "fast" in the annotations section means that the StorageClass named fast is used to automatically provision back-end storage for each mongo Pod instance.
  4. resources.requests.storage=2Gi means that 2GiB of disk space is allocated for each mongo instance.
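
The selector formed by these labels is the same one you can use yourself; once the StatefulSet is running, the following command lists exactly the Pods the sidecar will manage:

[root@k8smaster01 mongo]# kubectl get pods -l role=mongo,environment=test -o wide
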

[root@k8smaster01 mongo]# kubectl create -f statefulset-mongo.yaml          # Create the mongo StatefulSet
Tip: Because the mongo image may not be pullable from within China, it is recommended to pull the images in advance over a VPN and then load them onto all nodes.
[root@VPN ~]# docker pull cvallance/mongo-k8s-sidecar:latest
[root@VPN ~]# docker pull mongo:3.4.4
[root@VPN ~]# docker save -o mongo-k8s-sidecar.tar cvallance/mongo-k8s-sidecar:latest
[root@VPN ~]# docker save -o mongo_3_4_4.tar mongo:3.4.4
[root@k8snode01 ~]# docker load -i mongo-k8s-sidecar.tar
[root@k8snode01 ~]# docker load -i mongo_3_4_4.tar
[root@k8snode01 ~]# docker images
If the creation fails, the resources can be deleted and recreated as follows:
kubectl delete -f statefulset-mongo.yaml
kubectl delete -f mongo-headless-service.yaml
kubectl delete pvc -l role=mongo

3. Confirmation and Verification

3.1 View Resources

[root@k8smaster01 mongo]# kubectl get pod -l role=mongo          # View the cluster Pods
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   2/2     Running   0          9m44s
mongo-1   2/2     Running   0          7m51s
mongo-2   2/2     Running   0          6m1s
The StatefulSet creates a PVC instance for each Pod replica as defined in volumeClaimTemplates; the name of each PVC is the combination of the volumeClaimTemplates name in the StatefulSet definition and the name of the Pod replica.
[root@k8smaster01 mongo]# kubectl get pvc
[root@k8smaster01 mongo]# kubectl get pods mongo-0 -o yaml | grep -A 3 volumes          # View mounts
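
Given the naming rule above, the claims should appear as mongo-persistent-storage-mongo-0 through mongo-persistent-storage-mongo-2, each bound to a 2Gi volume from the fast StorageClass; the listing below is only an illustration (volume names and ages will differ):

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongo-persistent-storage-mongo-0   Bound    pvc-...   2Gi        RWO            fast           10m
mongo-persistent-storage-mongo-1   Bound    pvc-...   2Gi        RWO            fast           8m
mongo-persistent-storage-mongo-2   Bound    pvc-...   2Gi        RWO            fast           6m
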

3.2 View the mongo cluster

Log in to any mongo Pod and run the rs.status() command in the mongo shell to view the status of the MongoDB cluster that the sidecar has built. There are three nodes in the cluster, and each node's name is a network identity in the DNS domain-name format set by the StatefulSet: mongo-0.mongo.default.svc.cluster.local, mongo-1.mongo.default.svc.cluster.local, mongo-2.mongo.default.svc.cluster.local. You can also see the role (PRIMARY or SECONDARY) of each mongo instance.
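
These per-Pod DNS names can be resolved from inside any of the mongo Pods. The check below assumes getent is available in the mongo:3.4 image (it is Debian-based); any other DNS lookup tool works just as well:

[root@k8smaster01 mongo]# kubectl exec -ti mongo-1 -- getent hosts mongo-0.mongo.default.svc.cluster.local
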
[root@k8smaster01 mongo]# kubectl exec -ti mongo-0 -- mongo
......
rs0:PRIMARY> rs.status()
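
If only the member names and roles are needed rather than the full rs.status() document, the same query can be run non-interactively; this is just an illustrative shortcut:

[root@k8smaster01 mongo]# kubectl exec -ti mongo-0 -- mongo --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })'
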

4. Common Cluster Management

4.1 MongoDB Expansion

In a production environment, if three mongo instances are not enough to meet business requirements, the mongo cluster can be scaled out. You only need to scale the StatefulSet; new mongo nodes are then automatically added to the mongo cluster.
[root@k8smaster01 ~]# kubectl scale statefulset mongo --replicas=4          # Scale out to 4 instances
[root@k8smaster01 ~]# kubectl get pod -l role=mongo
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   2/2     Running   0          105m
mongo-1   2/2     Running   0          103m
mongo-2   2/2     Running   0          101m
mongo-3   2/2     Running   0          50m
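
Scaling back in works the same way. Note that when a StatefulSet is scaled down, the PVCs of the removed replicas are retained rather than deleted, so the data can be reused if the replica count is raised again; delete the claims manually if the storage really should be released:

[root@k8smaster01 ~]# kubectl scale statefulset mongo --replicas=3          # Scale back in to 3 instances
[root@k8smaster01 ~]# kubectl get pvc                                       # mongo-persistent-storage-mongo-3 is retained
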

4.2 Viewing cluster members

[root@k8smaster01 mongo]# kubectl exec -ti mongo-0 -- mongo
......
rs0:PRIMARY> rs.status()
......

4.3 Auto-recovery of failures

If a mongo instance or the node it runs on fails while the system is running, the StatefulSet automatically rebuilds the mongo instance and guarantees that its identity (ID) and the data it uses (PVC) remain the same. The following simulates a failure of the mongo-0 instance: the StatefulSet automatically rebuilds mongo-0 and mounts the previously assigned PVC mongo-persistent-storage-mongo-0. After the rebuilt mongo-0 service starts, the data in the original database is not lost and can be used again.
[root@k8smaster01 ~]# kubectl get pvc
[root@k8smaster01 ~]# kubectl delete pod mongo-0
[root@k8smaster01 mongo]# kubectl exec -ti mongo-0 -- mongo
......
rs0:PRIMARY> rs.status()
......
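
To observe the rebuild as it happens, the Pod list can be watched from a second terminal while the delete runs; -w streams changes until interrupted:

[root@k8smaster01 ~]# kubectl get pod -l role=mongo -w
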
Tip: Log in to an instance to view the status of the mongo cluster. Before the failure, mongo-0's role in the cluster is PRIMARY. After it leaves the cluster, the mongo cluster automatically elects a SECONDARY node and promotes it to PRIMARY (in this case, mongo-2). The restarted mongo-0 rejoins the cluster as a new SECONDARY node.
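
Which instance currently holds the PRIMARY role can also be checked without an interactive shell; the one-liner below would print the host:port of the primary (mongo-2 in this scenario):

[root@k8smaster01 ~]# kubectl exec -ti mongo-1 -- mongo --quiet --eval 'rs.isMaster().primary'
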
