1, Build environment
CentOS 7, 64-bit
MySQL 5.7.13
Docker 1.13.1
Next, we will install Docker on a server and use it to run three MySQL containers: one master and two slaves.
2, Install Docker and MySQL
1. Install Docker
yum install docker # answer y when prompted
2. Start docker
systemctl start docker.service #Start service
3. Install mysql
docker pull mysql:5.7.13 #Pull MySQL image using Docker
4. Run the master container and two slave containers slave1 / slave2
# master
docker run --name master -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7.13
# slave 1
docker run --name slave1 -p 3307:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7.13
# slave 2
docker run --name slave2 -p 3308:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7.13
# --name   specifies the container name, e.g. master
# -p       maps a host port to a container port (host:container); here host port 3306 maps to container port 3306
# -e       sets an environment variable; here the root password is set to 123456
# -d       runs the container in the background and prints the container ID
# mysql:5.7.13 specifies the MySQL image version to run
5. Check whether the startup is successful
docker ps    # show running containers
docker ps -a # show all containers, including stopped ones
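If you prefer to script this check, here is a minimal sketch. The container names master/slave1/slave2 come from step 4; the helper name `containers_up` is made up for this example:

```shell
# containers_up reads the output of `docker ps --format '{{.Names}}'`
# on stdin and verifies the three containers from step 4 are running.
containers_up() {
    names=$(cat)
    for c in master slave1 slave2; do
        printf '%s\n' "$names" | grep -qx "$c" || { echo "$c not running"; return 1; }
    done
    echo "all containers up"
}

# Usage on the Docker host:
#   docker ps --format '{{.Names}}' | containers_up
```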
6. Open ports 3306 / 3307 / 3308
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=3307/tcp --permanent
firewall-cmd --zone=public --add-port=3308/tcp --permanent
7. Restart the firewall
firewall-cmd --reload
3, Create a replication account
1. Connect with Navicat to verify that the instances are reachable
2. Create a replication account on the master instance
3. Execute the following SQL in Navicat's graphical interface
GRANT REPLICATION SLAVE ON *.* TO 'backup'@'%' IDENTIFIED BY 'backup';
SHOW GRANTS FOR 'backup'@'%';
4. If the output of SHOW GRANTS lists the REPLICATION SLAVE privilege, the authorization succeeded
4, Modify the MySQL configuration
1. Create the configuration file directories (mkdir -p <path>). The directory structure is as follows:
/usr/mysql/master
/usr/mysql/slave1
/usr/mysql/slave2
2. Copy the configuration file out of each of the three MySQL containers
docker cp master:/etc/mysql/my.cnf /usr/mysql/master/my.cnf
docker cp slave1:/etc/mysql/my.cnf /usr/mysql/slave1/my.cnf
docker cp slave2:/etc/mysql/my.cnf /usr/mysql/slave2/my.cnf
3. Enter the master/slave1/slave2 directories; the copied my.cnf should now be there
[root@administrator slave1]# ll total 4 -rw-r--r-- 1 root root 1801 May 10 10:27 my.cnf
4. Modify each my.cnf: add the following under the [mysqld] section at the end of the file and save
# under master
log-bin=mysql-bin
server-id=1
# under slave1
log-bin=mysql-bin
server-id=2
# under slave2
log-bin=mysql-bin
server-id=3
# log-bin=mysql-bin enables binary logging; mysql-bin is the prefix of the log file names
# server-id is a unique non-zero integer; it must differ from the server-id of every other server
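Before copying the files back, a small shell sketch can confirm that each my.cnf enables the binlog and has a unique, non-zero server-id. The helper name `check_ids` is illustrative, not part of the original setup:

```shell
# check_ids verifies that every my.cnf passed to it contains a log-bin
# setting and a unique, non-zero server-id.
check_ids() {
    ids=""
    for f in "$@"; do
        id=$(sed -n 's/^server-id=//p' "$f" | head -n 1)
        { [ -n "$id" ] && [ "$id" != "0" ]; } || { echo "bad server-id in $f"; return 1; }
        grep -q '^log-bin=' "$f" || { echo "log-bin missing in $f"; return 1; }
        case " $ids " in *" $id "*) echo "duplicate server-id $id"; return 1 ;; esac
        ids="$ids $id"
    done
    echo "config OK"
}

# Usage:
#   check_ids /usr/mysql/master/my.cnf /usr/mysql/slave1/my.cnf /usr/mysql/slave2/my.cnf
```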
5. Copy the three modified files back, overwriting the configuration files inside the Docker MySQL containers
docker cp /usr/mysql/master/my.cnf master:/etc/mysql/my.cnf
docker cp /usr/mysql/slave1/my.cnf slave1:/etc/mysql/my.cnf
docker cp /usr/mysql/slave2/my.cnf slave2:/etc/mysql/my.cnf
6. Restart the MySQL containers so the configuration takes effect
docker restart master
docker restart slave1
docker restart slave2
5, Configure master-slave replication
1. After connecting to slave1 with Navicat, create a new query and execute the following SQL
CHANGE MASTER TO
  MASTER_HOST='192.168.200.181',
  MASTER_PORT=3306,
  MASTER_USER='backup',
  MASTER_PASSWORD='backup';
START SLAVE;
# MASTER_HOST     the host IP address used in the Navicat connection
# MASTER_PORT     the port of the master container
# MASTER_USER     the user name of the replication account
# MASTER_PASSWORD the password of the replication account
2. Check whether the configuration is successful
show slave status;
If Slave_IO_State is "Waiting for master to send event", the configuration is half successful. If it is "Connecting to master", the configuration has almost certainly failed; recheck the settings above and look at the container log for the exact reason
[root@administrator master]# docker logs slave1 -f
Above we called this only half a success; the other half lies in the Slave_IO_Running and Slave_SQL_Running fields.
If both are Yes, congratulations: you can test master-slave replication. If either of them is not Yes, it is often caused by a transaction rollback after restarting the slave container; the fix is as follows
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
START SLAVE;
After executing these statements, check the three key fields again; they should now be correct
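To check the two fields without a GUI, a small sketch can parse the SHOW SLAVE STATUS output. The helper name `check_replication` is made up here; the container name and password come from the steps above:

```shell
# check_replication reads the output of `SHOW SLAVE STATUS\G` on stdin
# and reports whether both replication threads are running.
check_replication() {
    status=$(cat)
    if printf '%s\n' "$status" | grep -q 'Slave_IO_Running: Yes' &&
       printf '%s\n' "$status" | grep -q 'Slave_SQL_Running: Yes'; then
        echo "replication OK"
    else
        echo "replication BROKEN"
    fi
}

# Usage on the Docker host:
#   docker exec slave1 mysql -uroot -p123456 -e 'SHOW SLAVE STATUS\G' | check_replication
```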
So far, a one-master-one-slave setup is complete. Additional slave instances are added the same way, so the steps are not repeated here.
6, Test master-slave replication
First, create a test database on the master instance
Refresh the slave instance; the test database is now visible there
Create a table t_test in the test database and add an id test column
Insert several rows of data into the table
Refresh the slave; the t_test table and the rows 1, 2, 3 and 4 are already there
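The test above can also be scripted from the Docker host. This sketch only prints the seed SQL; the `seed_sql` helper is illustrative, while the database and table names match the steps above. Pipe its output into the master, then read the table back from the slave:

```shell
# seed_sql prints the statements that create the test schema on the master.
seed_sql() {
    cat <<'EOF'
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.t_test (id INT);
INSERT INTO test.t_test VALUES (1),(2),(3),(4);
EOF
}

# Usage on the Docker host:
#   seed_sql | docker exec -i master mysql -uroot -p123456
#   docker exec slave1 mysql -uroot -p123456 -e 'SELECT * FROM test.t_test;'
```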
So far, a MySQL setup with working master-slave replication has been built.