Introduction to Longhorn: making persistent storage easy

Introduction

In this article you will learn how to run Longhorn on a k3s cluster on Civo. If you haven't used Civo before, you can register on the official website (https://www.civo.com/) and apply for a free usage quota. First you need a Kubernetes cluster; then we'll install Longhorn and show how to use it with an example.

One of the principles of cloud-native applications is that they are designed to be stateless, so they can be scaled horizontally. In reality, however, unless your website or application holds only a trivial amount of data, you need to store that state somewhere.

Industry giants like Google and Amazon often build custom, scalable storage systems for their products. But what about smaller companies?

Rancher Labs, the creator of one of the most widely used Kubernetes management platforms in the industry, released Longhorn, a distributed block storage project for containers (since donated to the CNCF), in March 2018 to fill this gap. In short, Longhorn uses the existing disks of your Kubernetes nodes to provide stable, replicated storage for Kubernetes pods.

Preparation

Before we can use Longhorn, you need a running Kubernetes cluster. You can install a k3s cluster yourself (https://github.com/rancher/k3s/blob/master/README.md) or use Civo's Kubernetes service. This article uses Civo's Kubernetes service to create the cluster.

We recommend at least Medium-sized instances, because we will be testing stateful storage with MySQL, which can use a fair amount of RAM.

$ civo k8s create longhorn-test --wait
Building new Kubernetes cluster longhorn-test: \
Created Kubernetes cluster longhorn-test

Your cluster needs open-iscsi installed on each node, so if you are not using Civo's Kubernetes service, then in addition to the instructions linked above you need to run the following command on each node:

sudo apt-get install open-iscsi
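
As a quick sanity check (a sketch assuming your nodes use systemd), you can make sure the iSCSI daemon is enabled and running after installing the package:

sudo systemctl enable --now iscsid
sudo systemctl status iscsid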

Next, you need to download the cluster's Kubernetes configuration file, save it locally, and point the KUBECONFIG environment variable at it:

cd ~/longhorn-play
civo k8s config longhorn-test > civo-longhorn-test-config
export KUBECONFIG=civo-longhorn-test-config
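
With KUBECONFIG set, it's worth confirming that kubectl can reach the new cluster before going further; all nodes should report a STATUS of Ready:

kubectl get nodes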

Install Longhorn

Installing Longhorn on an existing Kubernetes cluster takes only two steps: install the Longhorn controller and its supporting components, then create a StorageClass that pods can use. Step one:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
namespace/longhorn-system created
serviceaccount/longhorn-service-account created
...
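
The Longhorn components take a minute or two to start. You can watch them come up and wait until everything in the longhorn-system namespace is Running:

kubectl get pods -n longhorn-system --watch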

Creating the StorageClass requires another command. As an additional step, you can make the new class the default so you don't have to specify it every time:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/examples/storageclass.yaml
storageclass.storage.k8s.io/longhorn created

$ kubectl get storageclass
NAME       PROVISIONER           AGE
longhorn   rancher.io/longhorn   3s

$ kubectl patch storageclass longhorn -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/longhorn patched

$ kubectl get storageclass
NAME                 PROVISIONER           AGE
longhorn (default)   rancher.io/longhorn   72s
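
With longhorn set as the default class, any PersistentVolumeClaim that omits storageClassName will be provisioned by Longhorn. For example, a minimal claim like this hypothetical one would get a Longhorn volume without naming the class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi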

Accessing the Longhorn Dashboard

Longhorn has a simple dashboard where you can see used space, available space, the list of volumes, and so on. But first, we need to create authentication credentials for it:

$ htpasswd -c ./ing-auth admin
$ kubectl create secret generic longhorn-auth \
  --from-file ing-auth --namespace=longhorn-system
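
If the htpasswd command is not available on your machine, it is part of the Apache utilities; on Debian/Ubuntu, for example, it comes from the apache2-utils package:

sudo apt-get install apache2-utils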

Now we will create an Ingress object that uses the Traefik ingress controller built into k3s to expose the dashboard to the outside world. Create a file called longhorn-ingress.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: longhorn-ingress
  annotations:
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "longhorn-auth"
spec:
  rules:
  - host: longhorn-frontend.example.com
    http:
      paths:
      - backend:
          serviceName: longhorn-frontend
          servicePort: 80

Then apply it:

$ kubectl apply -f longhorn-ingress.yaml -n longhorn-system
ingress.extensions/longhorn-ingress created
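
You can confirm that the Ingress object exists in the longhorn-system namespace:

kubectl get ingress -n longhorn-system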

Now you need to add an entry to your /etc/hosts file that points one of your Kubernetes node IP addresses to longhorn-frontend.example.com:

echo "1.2.3.4 longhorn-frontend.example.com" >> /etc/hosts

Now you can visit http://longhorn-frontend.example.com in your browser. After authenticating with admin and the password you entered when running htpasswd, you will see the Longhorn dashboard.

Installing MySQL with persistent storage

There is little point in running MySQL in a single container without persistent storage: when the underlying node (or the container) dies, your data goes with it, and so do your customers and orders. Here we will configure a new Longhorn persistent volume for it.

First, we need to create several Kubernetes resources. Each goes in its own YAML file in an empty directory, or you can put them all in a single file separated by --- lines.

A persistent volume in mysql/pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: apps
  labels:
    name: mysql-data
    type: longhorn
spec:
  capacity:
    storage: 5G
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  csi:
    driver: io.rancher.longhorn
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: '2'
      staleReplicaTimeout: '20'
    volumeHandle: mysql-data

A claim on the volume in mysql/pv-claim.yaml (essentially an abstract request that lets something use the volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

In mysql/pod.yaml is a Deployment that runs MySQL and mounts the volume claim above (please note: we use password as MySQL's root password here, but in practice you should use a strong password and store it in a Kubernetes Secret rather than in the YAML; we keep it simple here):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
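
As noted above, for anything beyond a quick demo the root password belongs in a Kubernetes Secret rather than in the manifest. As a minimal sketch (the Secret name mysql-root and its key password are just illustrative names), you could create the Secret with:

kubectl create secret generic mysql-root --from-literal=password='use-a-strong-password'

and then reference it from the container spec instead of a literal value:

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root
              key: password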

Now apply the whole folder or the single file (depending on which you chose earlier):

$ kubectl apply -f mysql.yaml
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created

# or

kubectl apply -f ./mysql/
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created
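
Before testing MySQL itself, you can check that the claim has bound to the Longhorn volume; both the PersistentVolume and the PersistentVolumeClaim should report a STATUS of Bound:

kubectl get pv,pvc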

Testing whether MySQL storage persists

Our test is very simple: create a new database, delete the pod (Kubernetes will recreate it for us), and then reconnect. If all goes well, we should still see our new database.

Now let's create a database called should_still_be_here:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-7g644   1/1     Running   0          2m28s
$ kubectl exec -it my-mysql-d59b9487b-7g644 /bin/bash
root@my-mysql-d59b9487b-7g644:/# mysql -u root -p mysql
Enter password: 
mysql> create database should_still_be_here;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found  |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)

mysql> exit
Bye
root@my-mysql-d59b9487b-7g644:/# exit
exit

Now we will delete the pod:

kubectl delete pod my-mysql-d59b9487b-7g644
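
If you'd rather watch the Deployment schedule a replacement pod than wait a fixed amount of time, you can stream the pod list:

kubectl get pods --watch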

After about a minute, we look up the new pod's name, connect to it, and check whether our database still exists:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-8zsn2   1/1     Running   0          84s
$ kubectl exec -it my-mysql-d59b9487b-8zsn2 /bin/bash
root@my-mysql-d59b9487b-8zsn2:/# mysql -u root -p mysql
Enter password: 
mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found  |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)

mysql> exit
Bye
root@my-mysql-d59b9487b-8zsn2:/# exit
exit

Great success! Our storage persisted even though the pod that originally used it was killed and recreated.

