Introduction and application of Kubernetes storage

Kubernetes storage (persistence)

Docker containers have a limited lifecycle, so Kubernetes uses volumes to keep data beyond a single container.

1. A StorageClass is a Kubernetes resource type. It is a logical grouping created by administrators to make PV management easier; PVs can be classified by the performance of the storage system, overall quality of service, backup policy, and so on. Kubernetes itself does not interpret what a class means; it is only a description.

2. One advantage of StorageClass is support for dynamic provisioning of PVs. When users need persistent storage, they do not have to create a PV in advance; they simply create a PVC, which is very convenient.

3. The name of a StorageClass object is very important. Besides the name, there are three key fields:
provisioner: the storage system that provides the storage resources. Kubernetes ships with multiple built-in provisioners whose names are prefixed with "kubernetes.io"; custom provisioners can also be used.
parameters: the StorageClass uses parameters to describe the storage volumes to be provisioned. Note that different provisioners accept different parameters.
reclaimPolicy: the reclaim policy of the PV. The available values are Delete (default) and Retain.
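A minimal StorageClass manifest might look like the following sketch; the class name and the RBD provisioner parameters are illustrative assumptions, not values from this article:

```yaml
# Hypothetical StorageClass; name and parameters are assumptions for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage              # assumed class name
provisioner: kubernetes.io/rbd    # a built-in provisioner, prefixed with "kubernetes.io"
parameters:                       # provisioner-specific parameters
  monitors: 192.168.1.21:6789
  pool: kube
reclaimPolicy: Delete             # Delete (default) or Retain
```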

Brief introduction

1. Because containers themselves are not persistent, some problems arise when running applications in containers. First, when a container crashes, kubelet restarts it, but files written inside the container are lost: the container restarts from the initial state of the image. Second, containers running together in a Pod usually need to share files. Kubernetes solves both problems with storage volumes.

2. Docker also has a concept of storage volumes, but a Docker volume is just a directory on disk or in another container, and its lifecycle is not managed. A Kubernetes storage volume has its own lifecycle, which matches the lifecycle of its Pod. As a result, a storage volume outlives any individual container running in the Pod, and data is retained across container restarts; when the Pod is deleted, however, the volume ceases to exist. Kubernetes supports many types of volumes, and a Pod can use any number of volumes of different types at the same time. A storage volume is used in a Pod via the following fields:
spec.volumes: defines the storage volumes available to the Pod
spec.containers.volumeMounts: mounts a storage volume into a container
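The two fields can be sketched in a minimal Pod manifest (the Pod name, image, and paths here are assumptions for illustration):

```yaml
# Minimal sketch: one emptyDir volume mounted into a container.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo        # assumed name
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:          # spec.containers.volumeMounts: attach the volume
    - name: data
      mountPath: /data
  volumes:                 # spec.volumes: declare the volume
  - name: data
    emptyDir: {}
```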

Environment introduction

Host     IP address     Service
master   192.168.1.21   k8s
node01   192.168.1.22   k8s
node02   192.168.1.23   k8s

1. emptyDir (empty directory): similar to Docker data persistence with a docker-managed volume

Usage scenario: in the same Pod, different containers share data volumes.

If a container is deleted, the data still exists; if the Pod is deleted, the data is deleted with it.

<1> introduction

An emptyDir is first created when a Pod is assigned to a node, and it exists for the whole lifecycle of that Pod. As its name suggests, it is initialized as an empty directory; the containers in the Pod can read and write this directory, and it can be mounted at the same or different paths in each container. When the Pod is removed for any reason, the data is deleted permanently. Note: a container crash does not cause data loss, because a crash does not remove the Pod.

The usage scenarios of emptyDir are as follows:

  • A blank initial scratch space, for example to hold temporary data on disk in a merge/sort algorithm.
  • A checkpoint store for long computations, so that after a container crash the work can resume from the last saved checkpoint (intermediate result) instead of starting from scratch.
  • Shared storage between two containers: a content-management container writes generated data into it, and a web-server container serves those pages.

By default, emptyDir volumes are stored on the node's storage medium (mechanical hard disk, SSD, or network storage).

<2> Roles of an emptyDir disk:

(1) Scratch space for disk-based data storage
(2) A checkpoint location for recovery from a crash
(3) Holding data the application needs while the Pod runs, such as files served by a web service
By default, emptyDir data is stored on whatever medium the host uses, whether SSD or network disk, depending on your environment. You can also set emptyDir.medium to Memory to tell Kubernetes to mount a memory-based tmpfs directory instead: tmpfs is faster than disk, but all data is lost when the host restarts.
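Setting the medium to Memory can be sketched like this (the volume name is an assumption):

```yaml
# Sketch: back the emptyDir with tmpfs; faster, but data is lost on host restart.
volumes:
- name: cache              # assumed volume name
  emptyDir:
    medium: Memory
```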

Write a test yaml file

[root@master yaml]# vim emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello k8s" > /producer_dir/hello; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}

Execute it.

[root@master yaml]# kubectl apply -f emptyDir.yaml 

Check it out.

[root@master yaml]# kubectl get pod  

View the logs

[root@master yaml]# kubectl logs producer-consumer -c producer
[root@master yaml]# kubectl logs producer-consumer -c consumer

View the mounted directory

On the node, find the container by name with docker ps, then inspect it to see the mounted directory:

[root@node01 shared-volume]# docker ps 

[root@node01 shared-volume]# docker inspect k8s_consumer_producer-consumer_default_9ec83f9e-e58b-4bf8-8e16-85b0f83febf9_0

Go to the mount directory to check

2. hostPath volume: similar to Docker data persistence with a bind mount

<1> introduction

A hostPath volume mounts a directory from the filesystem of the host node where the Pod is running into the Pod; the directory lives outside the container namespaces in the Pod. When the Pod is deleted, the stored data is not lost.

<2> Role

If the Pod is deleted, the data is retained, which is better than emptyDir. However, once the host crashes, the hostPath data can no longer be accessed.

Docker and the Kubernetes cluster itself use hostPath for their own storage.

Typical scenarios are as follows:

A container needs access to Docker: use hostPath to mount the host node's /var/lib/docker
Running cAdvisor in a container: use hostPath to mount the host node's /sys
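A hostPath Pod for the first scenario might be sketched as follows (the Pod and volume names are assumptions):

```yaml
# Hypothetical example: mount the node's /var/lib/docker into a container.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo      # assumed name
spec:
  containers:
  - name: tool
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - name: docker-dir
      mountPath: /var/lib/docker
  volumes:
  - name: docker-dir
    hostPath:
      path: /var/lib/docker   # directory on the host node
```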

3. PersistentVolume (PV): a data storage directory prepared in advance, with data persistence.

PersistentVolumeClaim (PVC)

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. Like a node, it is a cluster resource. A PV is a volume plugin, similar to a volume, but with a lifecycle independent of any individual Pod that uses it. The API object captures the implementation details of the storage, whether NFS, iSCSI, or a cloud-provider-specific storage system.

Concepts of PVC and PV

As mentioned earlier, Kubernetes provides many storage interfaces, and every Kubernetes node can manage this storage, but the various storage parameters require specialist storage engineers to understand, which makes Kubernetes administration more complex. Kubernetes therefore introduced the concepts of PV and PVC, so that developers and users do not need to care about the back-end storage or its parameters.


A PersistentVolumeClaim (PVC) is a user's request for storage. The usage logic of a PVC: define a storage volume of type PVC in a Pod, specifying the size directly. The PVC must be bound to a matching PV; the PVC requests a PV according to its definition, and the PV is carved out of the underlying storage. PV and PVC are storage resources abstracted by Kubernetes.

Although PersistentVolumeClaims let users consume abstract storage resources, users commonly need PVs with different properties for different scenarios. The cluster administrator then has to offer PVs that differ in more than just size and access mode, without exposing the implementation details of those volumes to users. For this requirement, the StorageClass resource can be used, as mentioned earlier.

PV is the resource in the cluster. PVC is a request for these resources and also a claim check for resources. The interaction between PV and PVC follows this life cycle:

Provisioning -> Binding -> Using -> Releasing -> Recycling

(1) PV based on the NFS service

NFS lets us mount an existing share into a Pod. Unlike emptyDir, whose data is deleted together with its Pod, NFS data is not deleted when the Pod goes away; the mount is simply released. This means data can be prepared in advance, passed between Pods, and read and written by multiple Pods at the same time.
Note: the NFS server must be running normally before we mount NFS.

Install the required NFS packages (on every node; here the master acts as the NFS server):

[root@node02 ~]# yum -y install nfs-utils rpcbind

Create shared directory

[root@master ~]# mkdir /nfsdata

Configure permissions for the shared directory

[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)

Turn on nfs and rpcbind

[root@master ~]# systemctl start nfs-server.service 
[root@master ~]# systemctl start rpcbind

Test it.

[root@master ~]# showmount -e

<1> Create yaml file of NFS PV

[root@master yaml]# cd yaml/
[root@master yaml]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:   #Size of pv capacity
    storage: 1Gi
  accessModes:  #Patterns for accessing pv
    - ReadWriteOnce #Can read-write mount to a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.21

accessModes (access modes supported by the PV):
  - ReadWriteOnce: can be mounted read-write by a single node
  - ReadWriteMany: can be mounted read-write by many nodes
  - ReadOnlyMany: can be mounted read-only by many nodes
persistentVolumeReclaimPolicy (reclaim policy for the PV's storage space):
  Recycle: automatically clears the data.
  Retain: must be reclaimed manually by the administrator.
  Delete: intended for cloud storage.

<2> Do it

[root@master yaml]# kubectl apply -f nfs-pv.yaml 

<3> Check it out

[root@master yaml]# kubectl get pv

<1> Create yaml file of NFS PVC


[root@master yaml]# vim nfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

<2> Do it

[root@master yaml]# kubectl apply -f nfs-pvc.yaml 

<3> Check it out

[root@master yaml]# kubectl get pvc

[root@master yaml]# kubectl get pv

(2) Create a pod resource

[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: pod1
      image: busybox
      args:
      - /bin/sh
      - -c
      - sleep 30000
      volumeMounts:
      - mountPath: "/mydata"
        name: mydata
  volumes:
    - name: mydata
      persistentVolumeClaim:
        claimName: test-pvc

<1> Do it

[root@master yaml]# kubectl apply -f pod.yaml 

<2> Check it out

[root@master yaml]# kubectl get pod -o wide

You can see that it has not started successfully.

Check the test-pod details to see what the problem is:
[root@master yaml]# kubectl describe pod test-pod

This is because the local directory backing the PV has not been created:
[root@master yaml]# mkdir /nfsdata/pv1/
//The same path as specified in nfs-pv.yaml
Recreate pod
[root@master yaml]# kubectl delete -f pod.yaml 
[root@master yaml]# kubectl apply -f pod.yaml 
[root@master yaml]# kubectl get pod -o wide

(3) Test pod create hello create file and add content

[root@master yaml]# kubectl exec test-pod -- touch /mydata/hello

Enter the container

[root@master yaml]# kubectl exec -it test-pod -- /bin/sh
/ # echo 123 > /mydata/hello
/ # exit

Check the mount directory

[root@master yaml]# cat  /nfsdata/pv1/hello 

The content is the same.

(4) Test the reclaim policy

Delete pod and pvc, pv

[root@master yaml]# kubectl delete pod test-pod 
[root@master yaml]# kubectl delete pvc test-pvc 
[root@master yaml]# kubectl delete pv test-pv 

Check it out.

[root@master yaml]# kubectl get pv

[root@master yaml]# cat  /nfsdata/pv1/hello

The file has been recycled (the Recycle policy cleared the data)

(5) Change the PV reclaim policy to manual reclamation (Retain)

modify

[root@master yaml]# vim nfs-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   #modify
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.21

Execute it.

[root@master yaml]# kubectl apply -f nfs-pv.yaml 

Create the pod (note that the PVC has not been recreated yet)

[root@master yaml]# kubectl apply -f pod.yaml 

Check it out.

[root@master yaml]# kubectl describe pod test-pod 

Create pvc

[root@master yaml]# kubectl apply -f nfs-pvc.yaml 

Check out pod

[root@master yaml]# kubectl get pod

(6) Test pod create hello create file and add content

[root@master yaml]# kubectl exec test-pod -- touch /mydata/k8s

Check the mount directory

[root@master yaml]# ls /nfsdata/pv1/

Delete pod and pvc, pv, view the mount directory again

[root@master yaml]# kubectl delete pod test-pod 
[root@master yaml]# kubectl delete pvc test-pvc
[root@master yaml]# kubectl delete pv test-pv 

View mount directory

[root@master yaml]# ls /nfsdata/pv1/

The content is still there.

4. MySQL data persistence in practice

The following steps show how to provide persistent storage for MySQL database:

  • Create PV and PVC.
  • Deploy MySQL.
  • Add data to MySQL.
  • Kubernetes will automatically migrate MySQL to other nodes in case of node failure.
  • Verify data consistency.

(1) Create pv and pvc through the previous yaml file

[root@master yaml]# kubectl apply -f  nfs-pv.yaml 
[root@master yaml]# kubectl apply -f  nfs-pvc.yaml 

Check it out.

[root@master yaml]# kubectl get pv

[root@master yaml]# kubectl get pvc

(2) Write a yaml file of mysql

[root@master yaml]# vim mysql.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-mysql
spec:
  selector:
    matchLabels:    # equality-based label selector
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: test-pvc

Execute it.

[root@master yaml]# kubectl apply -f mysql.yaml 

Check it out.

[root@master yaml]# kubectl get pod

(3) Enter the mysql container

① Create a database and switch to it.
② Create the table my_id.
③ Insert a row of data.
④ Verify that the data has been written.
Then shut down the node running the Pod to simulate a node failure.

[root@master yaml]# kubectl exec -it test-mysql-569f8df4db-rkpwm  -- mysql -u root -p123.com 

Create database

mysql> create database yun33;

Switch database

mysql> use yun33;

Create table

mysql> create table my_id( id int(4));

Insert data in table

mysql> insert my_id values(9527);

View table

mysql> select * from my_id;

(4) View local mount directory

[root@master yaml]# ls /nfsdata/pv1/

Check out pod

[root@master yaml]# kubectl get pod -o wide -w

Suspend node01

(5) Check whether the data on node02 is the same as before (verify the consistency of data)

Enter database

[root@master yaml]#  kubectl exec -it test-mysql-569f8df4db-nsdnz  -- mysql -u root -p123.com 

View the databases

mysql> show databases;

View table

mysql> show tables;

mysql> select * from my_id;

You can see that the data is still there

5. Troubleshooting

kubectl describe
//Check the details to find out the problem
kubectl logs
//Check logs for problems
/var/log/messages
//View the kubelet log on this node.

6. Summary

In this chapter we discussed how Kubernetes manages storage resources.
Volumes of type emptyDir and hostPath are convenient, but not persistent. Kubernetes also supports volumes backed by many external storage systems.
PV and PVC separate the responsibilities of administrators and ordinary users, and are better suited to production environments. We also saw how StorageClass enables more efficient dynamic provisioning.
Finally, we demonstrated how to use a PersistentVolume to achieve data persistence for MySQL.

1. Access modes of a PV

accessModes (access modes supported by the PV):

  • ReadWriteOnce: can be mounted read-write by a single node
  • ReadWriteMany: can be mounted read-write by many nodes
  • ReadOnlyMany: can be mounted read-only by many nodes

2. Reclaim policy of a PV

persistentVolumeReclaimPolicy (reclaim policy for the PV's storage space):

Recycle: automatically clears the data.
Retain: must be reclaimed manually by the administrator.
Delete: intended for cloud storage.

3. Binding between PV and PVC

A PVC is bound to a PV by matching the accessModes and storageClassName fields (together with the requested capacity).


Posted on Fri, 07 Feb 2020 06:50:25 -0500 by gotissues68