MySQL MGR multi-master mode cluster deployment


1. Deploy MySQL

Deploy MySQL on each node in the cluster

1.1 Install the MySQL Yum repository

yum install -y http://repo.mysql.com/mysql57-community-release-el7-8.noarch.rpm

1.2 Install the specified MySQL version with YUM

yum install -y mysql-community-server-5.7.34-1.el7.x86_64 \
mysql-community-common-5.7.34-1.el7.x86_64 \
mysql-community-client-5.7.34-1.el7.x86_64 \
mysql-community-libs-compat-5.7.34-1.el7.x86_64 \
mysql-community-libs-5.7.34-1.el7.x86_64
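
To confirm that the expected packages and server version were installed, an optional quick check:

rpm -qa | grep mysql-community
mysqld --version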

2. Initialize the database

2.1 Start the service

systemctl start mysqld
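
Optionally, enable the service so that MySQL starts automatically after a reboot, and confirm that it is running:

systemctl enable mysqld
systemctl status mysqld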

2.2 Set the root password

mysqladmin -uroot -p$(awk '/temporary password/ {print $NF}' /var/log/mysqld.log) password 'YourNewPassword'

Replace 'YourNewPassword' with a password of your own; with the MySQL 5.7 defaults it must satisfy the validate_password policy.
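
To verify that the new password works, an optional test login (it prompts for the password just set):

mysql -uroot -p -e "SELECT VERSION();"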

3. Configure MySQL

3.1 Hostname resolution

Each node in the MGR cluster must have a hostname set and be able to resolve the hostnames of the other nodes, either through DNS or local resolution.

Assume we currently have 3 servers with the following information:

Hostname    IP
node1       10.16.24.111
node2       10.16.24.112
node3       10.16.24.113
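
For example, local resolution can be configured by adding the following entries to /etc/hosts on every node (a minimal sketch based on the addresses above):

10.16.24.111    node1
10.16.24.112    node2
10.16.24.113    node3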

3.2 MySQL configuration file

3.2.1 The configuration file of the startup group node is as follows:

The MGR-related settings that must be added are included in the file below:

[mysqld]
# InnoDB buffer pool: cache for the most important hot data in MySQL. On a dedicated server, start at 70% of total RAM, otherwise 10%.
innodb_buffer_pool_size = 1024M

# Remove the leading '#' to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and quick queries.
# Adjust sizes as needed and experiment to find the optimal values.
join_buffer_size = 128M
sort_buffer_size = 2M
read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

relay-log-recovery=1
server_id=111
log-bin=mysql-bin
log-slave-updates=ON
binlog-format=row
binlog-checksum=NONE
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction-write-set-extraction=XXHASH64
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=4
slave_preserve_commit_order=1

plugin_load_add='group_replication.so'

group_replication_group_name = 'ce9be252-2b71-11e6-b8f4-00212889f856'
group_replication_start_on_boot = off
group_replication_bootstrap_group = off
group_replication_local_address = 'node1:33061'
group_replication_group_seeds = 'node1:33061,node2:33061,node3:33061'

# Turn off single-master mode
group_replication_single_primary_mode = off

# Must be enabled in multi-master mode: consistency checks are enforced on every member, and potentially risky operations are rejected outright
group_replication_enforce_update_everywhere_checks = on
group_replication_ip_whitelist = 'node1,node2,node3'

3.2.2 Introduction to the configuration items

The configuration items that must be set for group replication in multi-master mode are listed below, with brief notes:

# Unique ID of this server; it must be different on every node
server_id=111

# Group replication requires the binary log in row format, logging of
# replicated transactions, and binary log checksums disabled
log-bin=mysql-bin
log-slave-updates=ON
binlog-format=row
binlog-checksum=NONE

# GTID-based replication is mandatory
gtid_mode=ON
enforce_gtid_consistency=ON

# Store replication metadata in system tables instead of files
master_info_repository=TABLE
relay_log_info_repository=TABLE

# Hash write sets for conflict detection and load the group replication plugin
transaction-write-set-extraction=XXHASH64
plugin_load_add='group_replication.so'

# UUID that names the group; it must be identical on all members.
# Group replication is neither started nor bootstrapped automatically at server start.
group_replication_group_name = 'ce9be252-2b71-11e6-b8f4-00212889f856'
group_replication_start_on_boot = off
group_replication_bootstrap_group = off

# Address this member uses for group communication, and the seed members contacted when joining
group_replication_local_address = 'node1:33061'
group_replication_group_seeds = 'node1:33061,node2:33061,node3:33061'

# Turn off single-master mode
group_replication_single_primary_mode = off

# Must be enabled in multi-master mode: consistency checks are enforced on every member, and potentially risky operations are rejected outright
group_replication_enforce_update_everywhere_checks = on

3.2.3 The configuration files of the other nodes in the cluster are as follows:

Only the settings that differ are shown here; everything else is the same.

node2 ⬇️

server_id=112
group_replication_local_address = 'node2:33061'

node3 ⬇️

server_id=113
group_replication_local_address = 'node3:33061'
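
After the configuration file has been updated on each node, restart the service so that the new settings take effect, and optionally confirm that the group replication plugin was loaded:

systemctl restart mysqld
mysql -uroot -p -e "SHOW PLUGINS;" | grep group_replication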

4. Set up the startup group node and start MGR

The operations in this section are performed on only one node of the cluster, which becomes the first member of the group; it is referred to here as the startup group node.

4.1 Create and authorize the replication user on the startup group node

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;

Or:

mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
mysql> FLUSH PRIVILEGES;
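
Optionally, binary logging can be switched off while the replication user is created so that these statements are not written to the binary log and replicated later (the approach used in the MySQL reference manual):

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;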

4.2 Set the replication recovery channel on the startup group node

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password'
FOR CHANNEL 'group_replication_recovery';

4.3 Bootstrap the group and start group replication on the startup group node

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
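
At this point the startup group node should already appear as ONLINE, which can be checked before the other members are added:

mysql> SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;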

4.4 Create test data to verify that it is synchronized normally to the other members of the cluster

CREATE DATABASE test;
CREATE TABLE test.t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
INSERT INTO test.t1 VALUES (1, 'shark');
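
Once the remaining members have joined the group (section 5), the test row can be queried on any node to confirm that it was replicated:

SELECT * FROM test.t1;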

5. Start MGR on the other cluster members

5.1 Set the connection information for the group replication recovery channel

CHANGE MASTER TO MASTER_USER='rpl_user', 
MASTER_PASSWORD='password'  FOR CHANNEL 'group_replication_recovery';

5.2 Start group replication

START GROUP_REPLICATION;

6. Verify the members of the group

SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE  |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | node1       |        3306 | ONLINE        |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | node2       |        3306 | ONLINE        |
| group_replication_applier | fe90f134-6dfa-11e6-a69d-00212844f856 | node3       |        3306 | ONLINE        |
+---------------------------+--------------------------------------+-------------+-------------+---------------+

7. Troubleshooting

1. Log error output:

2021-08-01T21:29:39.162214Z 3 [ERROR] Plugin group_replication reported: 'Binlog must be enabled for Group Replication'

Reason: the binary log is not enabled.

Solution: modify the configuration file and add the binary log setting:

log_bin=binlog

2. Log error output:

2021-08-01T21:52:01.641762Z 10 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'

Reason: name resolution is not configured, so the group replication plugin cannot resolve the member hostnames.

Solution: configure DNS or local name resolution (/etc/hosts) on every server in the cluster (see section 3.1).

3. Log error output:

2021-08-01T22:36:05.135368Z 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: bbc4f604-f317-11eb-9d94-5254009263a6:1 > Group transactions: 1d234308-f2cf-11eb-88d7-525400a538ee:1-2, 2d814f0c-fdd5-5e44-9edb-c003d40281fc:1-2'

Reason:

The joining member has executed transactions (GTIDs) that are not present in the group. If the extra transactions are only the result of a misoperation or involve irrelevant data, they can be ignored by setting SET GLOBAL group_replication_allow_local_disjoint_gtids_join = ON, or the local GTID history can be cleared with RESET MASTER and gtid_purged then set to match the GTIDs executed by the group. If the inconsistent data would cause real problems, locate the differing rows and synchronize them with pt-table-sync, then reset the GTID-related state with RESET MASTER and related operations. Note that pt-table-sync needs binlog_format=STATEMENT so that the replica executes the same checksum statements.

Solution:

mysql> RESET MASTER;
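
If the extra local transactions should simply be ignored (the first option described above), the following is a sketch of that alternative; note that group_replication_allow_local_disjoint_gtids_join is deprecated in later MySQL 5.7 releases:

mysql> SET GLOBAL group_replication_allow_local_disjoint_gtids_join = ON;
mysql> START GROUP_REPLICATION;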
