k8s data persistence

Docker containers have a limited life cycle, so data volumes are needed to make data persistent

The main problems that data volumes solve are:

  • Data persistence: files written inside a container are temporary. When the container crashes, it is killed and re-created from the image, and the data written in the old container is lost
  • Data sharing: containers running in the same Pod often need to share files

A storage class (StorageClass) is a kind of k8s resource type. It is a logical grouping that administrators create to manage PVs more easily; classes can be defined according to the performance of the storage system, quality of service, backup policy, and so on. k8s itself does not know what a category means; the class name is only used as a description.

One of the benefits of storage classes is that they support the dynamic creation of PVs. When users need persistent storage, they can simply create a PVC directly, instead of the administrator creating PVs in advance.

The name of the StorageClass object is important. In addition to the name, there are three key fields:

provisioner:

The storage system that provides the storage resources. k8s has multiple built-in provisioners whose names are prefixed with "kubernetes.io"; custom provisioners can also be used.

parameters: a storage class uses parameters to describe the volumes it provisions; note that the parameters differ from provisioner to provisioner.

reclaimPolicy: the reclaim policy of the PVs that are created; available values are Delete (default) and Retain
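
To make these fields concrete, here is a minimal sketch of a StorageClass manifest. It is for illustration only and not part of this article's environment: the name standard, the built-in kubernetes.io/aws-ebs provisioner and the type: gp2 parameter are assumed for the example.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # the name that PVCs reference via storageClassName
provisioner: kubernetes.io/aws-ebs    # a built-in provisioner (assumed here); custom provisioners are also allowed
parameters:
  type: gp2                           # parameters are provisioner-specific
reclaimPolicy: Delete                 # Delete (default) or Retain

A PVC that sets storageClassName: standard would then get a PV created for it dynamically, instead of the administrator preparing PVs in advance.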

Volume:

emptyDir: used less often, and usually for temporary data; it is similar to Docker's "docker managed volume" form of data persistence. It is an empty directory when first allocated, and containers in the same Pod can read and write to the directory to share data

Scenario: sharing a data volume between different containers in the same Pod

If a container is deleted, the data still exists; if the Pod is deleted, the data is deleted

Example use:

[root@master ~]# vim emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image:  busybox
    name: producer
    volumeMounts:
    - mountPath:  /producer_dir  # path inside the container
      name: shared-volume        # name of the volume defined under volumes below
    args:
    - /bin/sh
    - -c
    - echo  "hello k8s" > /producer_dir/hello;  sleep 30000

  - image:  busybox
    name: consumer
    volumeMounts:
    - mountPath:  /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello;  sleep 30000

  volumes:
  - name: shared-volume  # must match the volume name used in the containers' volumeMounts above
    emptyDir: {}   # data persistence type: an empty directory
[root@master ~]# kubectl  apply   -f  emptyDir.yaml
[root@master ~]# kubectl  get  pod
NAME                READY   STATUS    RESTARTS   AGE  
producer-consumer   2/2     Running   0          14s
[root@master ~]# kubectl  logs  producer-consumer  consumer 
hello  k8s

//Use docker inspect to see where the mounted directory is (see the Mounts field)

[root@master ~]# kubectl  get  pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
producer-consumer   2/2     Running   0          69s   10.244.1.2   node01   <none>           <none>
//You can see that the Pod is running on node01; find the containers on node01 and view the details
[root@node01 ~]# docker ps
CONTAINER ID        IMAGE
f117beb235cf        busybox
13c7a18109a1        busybox
[root@node01 ~]# docker inspect 13c7a18109a1
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume",
                "Destination": "/producer_dir", //Mount directory inside container
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
//View another container
[root@node01 ~]# docker inspect f117beb235cf
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume",
                "Destination": "/consumer_dir",  //Mount directory inside container
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
//You can see the same mount directory used by both containers
[root@node01 ~]# cd  /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume
[root@node01 shared-volume]# ls
hello
[root@node01 shared-volume]# cat hello 
hello  k8s

//Delete the container to verify that the directory exists

[root@node01 ~]# docker rm  -f  13c7a18109a1 
13c7a18109a1
[root@node01 ~]# docker ps
CONTAINER ID        IMAGE
a809717b1aa5        busybox
f117beb235cf        busybox
//kubelet re-creates the deleted container to restore the Pod to its desired state, so the directory still exists

//Delete Pod

[root@master ~]# kubectl  delete  pod producer-consumer
[root@master ~]# ls  /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume
ls: cannot access /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume: No such file or directory
//Data will also be deleted after a Pod is deleted

hostPath Volume (used in fewer scenarios): similar to Docker's "bind mount" form of data persistence

It mounts a file or directory from the file system of the node where the Pod runs into a container

If the Pod is deleted, the data remains, which is better than emptyDir; but once the host crashes, the hostPath data is no longer accessible

Docker and the k8s cluster components themselves use hostPath for their own storage
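
As an illustration only (this Pod is not one of the article's steps; the names and the /data/hostpath path are made up), a hostPath volume is declared in a Pod spec roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo              # hypothetical name, for illustration
spec:
  containers:
  - name: demo
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: /mydata           # path inside the container
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data/hostpath         # directory on the node; the data survives Pod deletion
      type: DirectoryOrCreate      # create the directory on the node if it does not exist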

There are many Pods in a k8s cluster, and it is very inconvenient to manage them if they all use hostPath volumes, so PV is used instead

PersistentVolume | PV: a data storage directory prepared in advance for data persistence

It is a piece of storage space in the cluster that is managed by the cluster administrator or managed automatically by a StorageClass. Like Pod, Deployment, and Service, a PV is a resource object.

A PersistentVolume (PV) is networked storage that has been configured by an administrator in the cluster. It is a resource in the cluster, just as a node is a cluster resource. A PV is a volume plug-in, similar to a volume, but it has a life cycle that is independent of any individual Pod that uses it. This API object captures the implementation details of the storage, whether that is NFS, iSCSI, or a cloud-provider-specific storage system.

PersistentVolumeClaim | PVC (a declaration of, or application for, persistent volume use)

A PVC represents a user's request to use storage; it is an application for, and declaration of, PV persistent space. A k8s cluster may have multiple PVs, and you need to keep creating PVs for different applications.

A PVC is similar to a Pod: a Pod consumes node resources, and a PVC consumes storage resources. A Pod can request a specific amount of resources (CPU and memory); a claim can request a specific size and access mode.

The official document has a more detailed description: https://www.kubernetes.org.cn/pvpvcstorageclass

PV based on NFS service

[root@master ~]# yum  -y  install  nfs-utils   (all nodes need to install this, otherwise mounting reports a wrong fs type error)
[root@master ~]# yum  -y  install  rpcbind
[root@master ~]# mkdir  /nfsdata
[root@master ~]# vim  /etc/exports
/nfsdata  *(rw,sync,no_root_squash)
[root@master ~]# systemctl  start  rpcbind
[root@master ~]# systemctl  start  nfs-server
[root@master ~]# showmount  -e
Export list for master:
/nfsdata *

  1. Create the PV (and the actual storage directory)
  2. Create the PVC
  3. Create the Pod

Create a PV resource object:

[root@master ~]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:    # size of the PV
    storage:  1Gi
  accessModes:    # access modes supported by the PV
    - ReadWriteOnce
  persistentVolumeReclaimPolicy:  Recycle    # reclaim policy for the PV's storage space
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.70
[root@master ~]# kubectl  apply  -f  nfs-pv.yaml
[root@master ~]# kubectl  get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Available           nfs                     9m30s

accessModes: (access modes supported by the PV)

- ReadWriteOnce: can be mounted read-write by a single node

- ReadWriteMany: can be mounted read-write by multiple nodes

- ReadOnlyMany: can be mounted read-only by multiple nodes

persistentVolumeReclaimPolicy: (reclaim policy for the PV's storage space)

Recycle: Clear data automatically

Retain: Manual recycling by administrator is required

Delete: for cloud storage only; deletes the data directly
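
As a side note, the reclaim policy of an existing PV can also be changed in place with kubectl patch. This command is not part of the article's steps, but it matches the test-pv object created above:

kubectl patch pv test-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'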

Association between PV and PVC: through storageClassName && accessModes

Create PVC

[root@master ~]# vim  nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:    # access modes
    - ReadWriteOnce
  resources:
    requests:
      storage:  1Gi   # size of the requested capacity
  storageClassName:  nfs    # request a PV of this storage class
[root@master ~]# kubectl apply -f nfs-pvc.yaml
[root@master ~]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   1Gi        RWO            nfs            14s

Using the PV in a Pod:

Create a Pod resource:

[root@master ~]# vim  pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: pod1
    image:  busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath:  "/mydata"
      name: mydata
  volumes:
    - name:  mydata
      persistentVolumeClaim:
        claimName:  test-pvc
[root@master ~]# kubectl  apply  -f  pod.yaml

The mount directory specified when the PV was created earlier is /nfsdata/pv1. We have not created the pv1 directory yet, so this Pod does not run successfully.

The following are the troubleshooting methods:

  1. kubectl describe
  2. kubectl logs
  3. /var/log/messages
  4. View the kubelet log for this node
//Use kubectl describe
[root@master ~]# kubectl  describe  pod  test-pod
mount.nfs: mounting 192.168.1.70:/nfsdata/pv1 failed, reason given by server: No such file or directory  //the server reports that the directory does not exist

Create a directory and view the pod status:

[root@master ~]# mkdir  /nfsdata/pv1
[root@master ~]# kubectl  get  pod  -o  wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          12m   10.244.1.3   node01   <none>           <none>

Verify that the application was successful:

[root@master ~]# kubectl  exec  test-pod  touch /mydata/hello
[root@master ~]# ls  /nfsdata/pv1/
hello
[root@master ~]# echo  123  >  /nfsdata/pv1/hello 
[root@master ~]# kubectl  exec  test-pod  cat /mydata/hello
123

Delete the Pod and verify the Recycle reclaim policy:

[root@master ~]# kubectl  delete  pod  test-pod
[root@master ~]# kubectl  delete  pvc test-pvc
[root@master ~]# kubectl  get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Available           nfs                     42h
[root@master ~]# ls  /nfsdata/pv1/
[root@master ~]#
//Validation successful, data has been recycled

Normally the policy is not set to clear data automatically; otherwise it would be almost the same as emptyDir.

Delete the PV and modify the reclaim policy:

Previously the order was: create PV ---> PVC ---> Pod. Now adjust it to: create PV ---> Pod ---> PVC

[root@master ~]# vim  nfs-pv.yaml 
  persistentVolumeReclaimPolicy:  Retain
[root@master ~]# kubectl  apply  -f  nfs-pv.yaml 
[root@master ~]# kubectl  get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Retain           Available           nfs                     7s
[root@master ~]# kubectl  apply  -f  pod.yaml 
[root@master ~]# kubectl  get pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   0/1     Pending   0          5s  //Pending: the Pod is waiting to be scheduled
[root@master ~]# kubectl  describe  pod test-pod
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  41s (x2 over 41s)  default-scheduler  persistentvolumeclaim "test-pvc" not found
//No corresponding pvc was found

//Create pvc
[root@master ~]# kubectl  apply  -f  nfs-pvc.yaml
[root@master ~]# kubectl  get pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          114s

Verify the Retain reclaim policy:

[root@master ~]# kubectl  exec test-pod  touch  /mydata/k8s
[root@master ~]# ls  /nfsdata/pv1/
k8s
[root@master ~]# kubectl  delete  pod test-pod 
[root@master ~]# kubectl  delete  pvc test-pvc
[root@master ~]# ls  /nfsdata/pv1/
k8s
//You can see that there is no recycling
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Retain           Available           nfs                     6s

MySQL data persistence:

//The PV from before is reused; the PVC was deleted, so just re-create it with the previous manifest

[root@master ~]# kubectl  apply  -f  nfs-pvc.yaml 
[root@master ~]# kubectl  get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   1Gi        RWO            nfs            7s

Create a Deployment resource object with a mysql container

[root@master ~]# vim mysql.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-mysql
spec:
  selector:
    matchLabels:  # equality-based label selector
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: test-pvc
[root@master ~]# kubectl  apply  -f  mysql.yaml
[root@master ~]# kubectl  get deployments.
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
test-mysql   1/1     1            1           61s

Enter the container, create some data, and verify that the PV is being used:

[root@master ~]# kubectl  get pod -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-mysql-569f8df4db-fnnxc   1/1     Running   0          32m   10.244.1.5   node01   <none>           <none>
[root@master ~]# kubectl  exec  -it  test-mysql-569f8df4db-fnnxc  --  mysql -u root -p123.com
mysql> create database yun33;  //Create a database
mysql> use yun33;  //Switch to the database
Database changed
mysql> create table my_id( id int(4));  //Create a table
mysql> insert my_id values(9527);  //Insert data into a table
mysql> select * from my_id;  //View all data in the table
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.00 sec)
[root@master ~]# ls /nfsdata/pv1/
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  k8s  mysql  performance_schema  yun33

Shut down node01 to simulate node downtime:

[root@master ~]# kubectl get pod -o wide -w
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-mysql-569f8df4db-fnnxc   1/1     Running   0          36m   10.244.1.5   node01   <none>           <none>
test-mysql-569f8df4db-fnnxc   1/1     Terminating   0          38m   10.244.1.5   node01   <none>           <none>
test-mysql-569f8df4db-2m5rd   0/1     Pending       0          0s    <none>       <none>   <none>           <none>
test-mysql-569f8df4db-2m5rd   0/1     Pending       0          0s    <none>       node02   <none>           <none>
test-mysql-569f8df4db-2m5rd   0/1     ContainerCreating   0          0s    <none>       node02   <none>           <none>
test-mysql-569f8df4db-2m5rd   1/1     Running             0          2s    10.244.2.4   node02   <none>           <none>
[root@master ~]# kubectl get pod -o wide 
NAME                          READY   STATUS        RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-mysql-569f8df4db-2m5rd   1/1     Running       0          20s   10.244.2.4   node02   <none>           <none>
test-mysql-569f8df4db-fnnxc   1/1     Terminating   0          38m   10.244.1.5   node01   <none>           <none>

Verify that the newly generated Pod on node02 contains the data we created earlier:

[root@master ~]# kubectl  exec -it test-mysql-569f8df4db-2m5rd  -- mysql -u root -p123.com
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| yun33              |
+--------------------+
4 rows in set (0.01 sec)

mysql> use yun33;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-----------------+
| Tables_in_yun33 |
+-----------------+
| my_id           |
+-----------------+
1 row in set (0.01 sec)

mysql> select *  from my_id;
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.01 sec)
[root@master ~]# ls  /nfsdata/pv1/
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  k8s  mysql  performance_schema  yun33
