Foreword
In this article, I will walk you through experiments with PV and PVC storage in Kubernetes, to help you gain a deeper understanding of the storage part of k8s.
The theme of my blog: I hope everyone can follow along with the experiments in my posts. Do the experiments first, then combine them with the theoretical knowledge to understand the technical points more deeply, so that learning stays fun and motivating. The steps in my posts are complete, and I also share the source code and the software used in the experiments. I hope we can make progress together!
If you run into any problems while following along, you can contact me at any time, and I will help you solve them for free:
- Personal WeChat: x2675263825 (shede); QQ: 2675263825
- Personal blog: www.onlyonexl.cn
- Personal WeChat official account: cloud native architect real battle
- Personal CSDN: https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421
Introduction to basic knowledge
Experimental environment
Experimental environment:
- Host: Windows 10 with VMware Workstation
- Kubernetes cluster: three CentOS 7.6 (1810) virtual machines, 1 master node and 2 worker nodes
- Kubernetes version: v1.21
- Container runtime: docker://20.10.7
Special note: an NFS storage service must be configured in advance!
For how to configure the NFS storage class, see my other blog post:
https://www.onlyonexl.cn/2021/09/21/36%20%E5%AE%9E%E6%88%98%EF%BC%9A%E7%BD%91%E7%BB%9C%E5%AD%98%E5%82%A8%E5%8D%B7NFS%E5%AE%9E%E9%AA%8C%E6%BC%94%E7%A4%BA(%E6%88%90%E5%8A%9F%E6%B5%8B%E8%AF%95-%E5%8D%9A%E5%AE%A2%E8%BE%93%E5%87%BA)-20210921/
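Before going further, it may help to confirm that the NFS storage class is in place. A minimal check, assuming the storage class created in that post is named managed-nfs-storage (the name referenced later in stateful.yaml):

[root@k8s-master ~]# kubectl get storageclass
# Expect an entry named managed-nfs-storage backed by the NFS provisioner.
[root@k8s-master ~]# kubectl get sc managed-nfs-storage -o yaml
# Shows the provisioner and parameters; no default class is required here,
# because the StatefulSet references the class by name via storageClassName.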
1. Copy the official sample code and modify it
Reference documents: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Official Code:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
Create stateful.yaml (modified code):
[root@k8s-master ~]#vim stateful.yaml

apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # Pay attention here
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "sts-nginx" # Pay attention here
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ] # Create a separate PV for each pod
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
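If you want to catch YAML mistakes before touching the cluster, an optional client-side dry run can be used (not part of the original steps, just a sanity check):

[root@k8s-master ~]# kubectl apply -f stateful.yaml --dry-run=client
# Should report the Service and the StatefulSet as "created (dry run)"
# without actually creating anything.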
2. Apply and view
[root@k8s-master ~]# kubectl apply -f stateful.yaml
service/sts-nginx created
statefulset.apps/sts-web created
Check PV and PVC:
[root@k8s-master ~]#kubectl get pv,pvc
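StatefulSet pods are created one at a time, in ordinal order. If you want to watch the startup sequence (an optional step borrowed from the official StatefulSet tutorial), run:

[root@k8s-master ~]# kubectl get pods -w -l app=nginx
# sts-web-0 must be Running and Ready before sts-web-1 is created,
# and sts-web-1 before sts-web-2; press Ctrl+C to stop watching.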
3. Verify
1. The correspondence between the sts ordinal and the PVC; the sts ordinal marks the startup sequence; the sts ordinal also determines the pod's hostname.
Check PV and PVC:
[root@k8s-master ~]#kubectl get pv,pvc
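A minimal way to check all three correspondences. PVCs created from volumeClaimTemplates follow the fixed naming pattern <template name>-<StatefulSet name>-<ordinal>, so this manifest should produce www-sts-web-0, www-sts-web-1 and www-sts-web-2, and each pod's hostname equals its pod name:

# One PVC (and one bound PV) per pod
[root@k8s-master ~]# kubectl get pvc
# The hostname inside the pod is the pod name, ordinal included
[root@k8s-master ~]# kubectl exec sts-web-0 -- hostname
sts-web-0
# The ordinal also reflects the startup order (0 starts first)
[root@k8s-master ~]# kubectl get pod -l app=nginx -o wide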
2. Verify whether the pods share storage. => Each pod has its own dedicated storage.
Go into the sts-web-0 pod, write a web0 file, and then check which back-end storage directory the test file appears in:
Next, create a test file web1.html directly in the back-end storage directory of sts-web-1, and check whether it appears in the back-end directories of the other two pods. => It does not.
Then check whether web1.html can be served from the sts-web-1 pod's website. => It can.
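A sketch of the commands behind these checks, assuming the NFS export root on the NFS server is /data/nfs (the actual export path and the per-PVC subdirectory names depend on how your NFS provisioner was set up):

# 1. Write web0.html from inside sts-web-0
[root@k8s-master ~]# kubectl exec -it sts-web-0 -- sh -c 'echo web0 > /usr/share/nginx/html/web0.html'
# On the NFS server the file appears only in the subdirectory backing www-sts-web-0
[root@nfs-server ~]# ls /data/nfs/*/
# 2. Create web1.html directly in sts-web-1's back-end directory on the NFS server,
#    then confirm that only sts-web-1 can see it
[root@k8s-master ~]# kubectl exec -it sts-web-1 -- cat /usr/share/nginx/html/web1.html
[root@k8s-master ~]# kubectl exec -it sts-web-0 -- cat /usr/share/nginx/html/web1.html   # should fail: No such file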
Note the host name of the pod:
End of test.
3. The sts ordinal and network identity
Create a busybox test pod:
[root@k8s-master ~]#kubectl run dns-test --image=busybox:1.28.4 -- sleep 24h
pod/dns-test created
Delete all the current pods to clear the test environment, then start a plain nginx Deployment and expose it:
[root@k8s-master ~]#kubectl delete -f .
[root@k8s-master ~]#kubectl create deployment web --image=nginx   # Start an nginx pod
[root@k8s-master ~]#kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
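A quick optional check of what this creates: web is an ordinary NodePort Service with its own ClusterIP, in contrast to the headless sts-nginx Service (clusterIP: None) used by the StatefulSet:

[root@k8s-master ~]# kubectl get deployment,svc web
# The web Service shows a CLUSTER-IP and a NodePort; DNS queries for it resolve
# to that single virtual IP rather than to the individual pod IPs.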
Check the labels of the current pods; they are distinct for each application, which is what the experiment needs:
[root@k8s-master ~]#kubectl get pod --show-labels
Scale the web application to three replicas:
[root@k8s-master ~]#kubectl scale deployment web --replicas=3
deployment.apps/web scaled
Apply stateful.yaml again:
[root@k8s-master ~]#kubectl apply -f stateful.yaml
service/sts-nginx created
statefulset.apps/sts-web created
Now we effectively have two applications, web (a Deployment) and sts-web (a StatefulSet), with three replicas each.
[root@k8s-master ~]#kubectl get pod --show-labels
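The naming difference is already visible in the pod list: Deployment pods get a ReplicaSet hash plus a random suffix, while StatefulSet pods get a stable ordinal. A sketch of what to look for (the hash and suffix below are placeholders):

[root@k8s-master ~]# kubectl get pod -l app=web
# Deployment pods: web-<hash>-<random>, e.g. web-96d5df5c8-abcde
[root@k8s-master ~]# kubectl get pod -l app=nginx
# StatefulSet pods: always sts-web-0, sts-web-1, sts-web-2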
Now let's look at the differences between the two applications.
Go into the dns-test container and test with the nslookup command; the conclusions are as follows:
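A sketch of the DNS queries and the behavior to expect, assuming the default cluster domain cluster.local and the default namespace (this is standard Kubernetes DNS behavior for headless vs. normal Services):

# Headless Service of the StatefulSet: resolves to every pod IP,
# and each pod gets its own stable DNS name
[root@k8s-master ~]# kubectl exec -it dns-test -- nslookup sts-nginx
[root@k8s-master ~]# kubectl exec -it dns-test -- nslookup sts-web-0.sts-nginx
# i.e. sts-web-<ordinal>.sts-nginx.default.svc.cluster.local
# Normal Service of the Deployment: resolves to a single ClusterIP,
# and the pods behind it have no stable DNS names
[root@k8s-master ~]# kubectl exec -it dns-test -- nslookup web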
This is the end of the experiment.
Summary
Well, that's all for the StatefulSet workload controller (stateful application deployment) experiment. Thank you for reading, and see you next time!