Deploying MySQL clusters in Kubernetes

Introduction: MySQL is one of the most commonly deployed stateful applications. In this article, we will deploy a one-master, multi-slave MySQL cluster on Kubernetes.



1, Configuration preparation

1. ConfigMap

#application/mysql/mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only

A ConfigMap decouples the configuration files from the container image.
The configuration above creates a master.cnf file containing log-bin, which enables the binary log needed by the master node,
and a slave.cnf file containing super-read-only, which makes the standby nodes read-only.
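
Once the ConfigMap is applied (see section 2 below), you can confirm that both files were stored under the expected keys. This is just a sanity check, not a required step:

kubectl describe configmap mysql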

2. Service

# application/mysql/mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

The first definition creates a headless service named mysql, which gives each StatefulSet pod a stable DNS name (for example, mysql-0.mysql).
The second creates a normal service named mysql-read, which load-balances read connections across all MySQL pods.
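
As a quick illustration of how clients use the two services (a minimal sketch; it assumes a client in the same namespace with the mysql CLI available):

//The headless service gives each pod a stable DNS name: <pod>.<service>,
//so mysql-0.mysql always reaches pod 0, the master
mysql -h mysql-0.mysql -e 'SELECT @@hostname'
//mysql-read load-balances across all ready pods, so repeated
//connections may land on different replicas
mysql -h mysql-read -e 'SELECT @@hostname'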

3. StatefulSet

#application/mysql/mysql-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      # Init containers run to completion before the main containers start; they prepare the pod
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        # Assign a unique server-id to each MySQL pod based on its ordinal.
        # Pod 0 (the master) gets master.cnf; all other pods get slave.cnf.
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        # On every pod except the master (ordinal 0), clone the data from the previous pod
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        # Allow passwordless root login (for demo purposes only; do not use in production)
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          # Resource requests needed to schedule the pod. The official example
          # uses 500m CPU and 1Gi memory; my local cluster reported insufficient
          # CPU and memory at those values, so they are reduced here.
          requests:
            # m means millicores: 100m is 0.1 CPU
            cpu: 100m
            # Mi means mebibytes: this requests 100 MiB of memory
            memory: 100Mi
        livenessProbe:
          # Use mysqladmin ping to check that mysqld is alive.
          # Starts 30 seconds after the container starts, runs every 10 seconds, and times out after 5 seconds.
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          # Check that the server can answer queries: starts 5 seconds after startup, runs every 2 seconds, times out after 1 second
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        # Parse the cloned backup, start replication if needed, then serve backups to peers
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  # PVC template: each pod gets its own PersistentVolumeClaim
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The YAML above defines the configuration and startup of the master and slave nodes. Next, create the resources one by one.
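
Once the pods are up (see the next two sections), you can verify what the init-mysql container generated for each pod. For example, pod mysql-1 should have server-id 101 (the 100 offset plus its ordinal):

kubectl exec mysql-1 -c mysql -- cat /etc/mysql/conf.d/server-id.cnf
//Expected output:
//[mysqld]
//server-id=101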

2, Create required resources

//Create the configMap
kubectl apply -f mysql-configmap.yaml
//Create the services
kubectl apply -f mysql-services.yaml
//Create the statefulSet
kubectl apply -f mysql-statefulset.yaml


After applying these, you can watch the pods being created with the following command.

kubectl get pods --watch
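
Once all three pods are ready, the output should look roughly like this (ages will vary; READY shows 2/2 because each pod runs both the mysql and xtrabackup containers):

NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          2m
mysql-1   2/2     Running   0          1m
mysql-2   2/2     Running   0          40s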


3, Test the master database

1. Enter the pod

Enter pod mysql-0 to test. The pod runs two containers (mysql and xtrabackup), so specify the mysql container explicitly:

kubectl exec -it mysql-0 -c mysql -- bash

2. Connect to mysql-0 with the MySQL client

mysql -h mysql-0

3. Create a database and table

//Create the test database
create database test;
//Switch to the test database
use test;
//Create the message table
create table message (message varchar(50));
//View the message table definition
show create table message;

4. Insert data

//Insert a row
insert into message values("hello aloofjr");
//Query the table
select * from message;
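
If the insert succeeded, the select should return the row just written:

+---------------+
| message       |
+---------------+
| hello aloofjr |
+---------------+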


4, Test the standby database

1. Connect to mysql-1

mysql -h mysql-1.mysql

2. View the databases and table structure

//List the databases
show databases;
//Switch to the test database
use test;
//List the tables
show tables;
//View the message table definition
show create table message;

3. Read data

//The row inserted on the master should appear, confirming replication
select * from message;

4. Write data

insert into message values("hello world");

ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

This is expected: mysql-1 is a read-only standby node and cannot accept writes.


5, Test the mysql-read service

kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"

The loop queries the database once per second through the mysql-read service. You should see the server_id change between queries, showing that connections are distributed across the pods.
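
The output should look roughly like this (timestamps are illustrative; the server_id values are the 100 + ordinal offsets assigned by the init container):

+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2021-11-25 21:15:07 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         102 | 2021-11-25 21:15:08 |
+-------------+---------------------+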


6, Scaling up and down

//Scale up to 5 replicas
kubectl scale statefulset mysql --replicas=5
//Scale down to 2 replicas
kubectl scale statefulset mysql --replicas=2
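
Note that scaling down does not delete the PersistentVolumeClaims of the removed pods, so their data survives if you scale back up. To reclaim the storage, delete the leftover claims yourself (PVC names follow the data-<pod> pattern from the volumeClaimTemplate); for example, after scaling from 5 down to 2:

//List the claims left behind after scaling down
kubectl get pvc -l app=mysql
//Delete the claims of the removed pods
kubectl delete pvc data-mysql-2 data-mysql-3 data-mysql-4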

7, Clean up

kubectl delete statefulset mysql
kubectl delete configmap,service,pvc -l app=mysql
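
Depending on your storage class, deleting the PVCs may or may not release the underlying PersistentVolumes automatically; manually provisioned volumes in particular must be deleted by hand. Check what remains with:

kubectl get pv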

8, Summary

The above is the process of deploying a one-master, multi-slave MySQL cluster on Kubernetes. The key points:

  • A configMap decouples configuration from the image
  • initContainers prepare the pod before the main containers start
  • requests declare the CPU and memory a pod needs to be scheduled
  • livenessProbe checks whether a pod is alive
  • readinessProbe checks whether a pod can serve traffic

The YAML files used in this article can be found in my GitHub repository, AloofJr.

This article is reproduced from: Deploying MySQL Cluster in Kubernetes - Alibaba Cloud developer community
