MySQL High Availability Cluster - MHA

Article directory

1, MHA introduction

2, Deploy MHA

Step 1: install mysql on the three master and slave servers

Step 2: modify MySQL's main configuration file /etc/my.cnf. Note that the server IDs of the three servers must all be different

Step 3: start the mysql service on all three servers

Step 4: configure MySQL master-slave synchronization (one master, two slaves)

Step 5: install MHA

Step 6: start MHA

1, MHA introduction:

MHA (Master High Availability)

(1) MHA is currently a relatively mature solution for MySQL high availability. Developed by youshimaton of the Japanese company DeNA (now working at Facebook), it is an excellent piece of high-availability software for failover and slave promotion in a MySQL high-availability environment. During a MySQL failover, MHA can complete the database failover automatically within 0-30 seconds, and throughout the failover it guarantees data consistency to the greatest extent possible, achieving true high availability.

(2) The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on a slave node. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and then repoints all the other slaves to the new master. The entire failover process is completely transparent to the application.

(3) Working principle:

1. During automatic failover, MHA tries to save the binary logs from the downed master so that, as far as possible, no data is lost; however, this is not always feasible. For example, if the master's hardware has failed or it cannot be reached over ssh, MHA cannot save the binary logs and can only fail over, losing the most recent data. Semi-synchronous replication, available since MySQL 5.5, greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping the data on every node consistent (a minimal sketch of enabling it appears after the failover sequence below).

2. Failover sequence:
Save the binary log events from the crashed master;
Identify the slave with the most recent data;
Apply the differing relay logs to the other slaves;
Apply the binary log events saved from the master;
Promote one slave to be the new master;
Point the other slaves at the new master and resume replication.
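As mentioned above, here is a minimal sketch of enabling semi-synchronous replication with MySQL's bundled semisync plugins. This is an illustrative aside, not part of the deployment steps in this article:

//On the master:
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
//On each slave:
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;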

2, Deploy MHA:

Packages each role needs to install:

master (192.168.220.131): mha4mysql-node

slave1 (192.168.220.140): mha4mysql-node

slave2 (192.168.220.136): mha4mysql-node

manager (192.168.220.170): mha4mysql-manager, mha4mysql-node

(1) Requirements:

In this case, MHA is required to monitor the MySQL database and switch over automatically when a failure occurs, without affecting the business.

(2) Ideas:

1. Install the MySQL database
2. Configure MySQL with one master and two slaves
3. Install the MHA software
4. Configure passwordless authentication
5. Configure MySQL and MHA high availability
6. Simulate a master failover

(3) Operating system: CentOS 7.6; MHA version: 0.57

Step 1: install mysql on the three master and slave servers (MySQL version 5.6.36; cmake version 2.8.6). Since the installation is identical on all three, only the procedure on the master is shown here.
1. Install the build dependency packages:
yum -y install ncurses-devel gcc-c++ perl-Module-Install

2. Install gmake by compiling it from source (extract it, then ./configure and gmake && gmake install will do).

3. Install the MySQL database:
//Extract the source, then compile:
cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DSYSCONFDIR=/etc
make && make install

4. Create the mysql user and grant ownership:
groupadd mysql
useradd -M -s /sbin/nologin mysql -g mysql
chown -R mysql.mysql /usr/local/mysql

5. Set the environment variables:
cp support-files/my-default.cnf /etc/my.cnf
cp support-files/mysql.server /etc/rc.d/init.d/mysqld
chmod +x /etc/rc.d/init.d/mysqld
chkconfig --add mysqld
echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
source /etc/profile //Make the environment variables take effect

Step 2: modify MySQL's main configuration file /etc/my.cnf. Note that the server IDs of the three servers must all be different

---Configure the master:
vim /etc/my.cnf
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true

---Configure slave1:
vim /etc/my.cnf
[mysqld]
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

---Configure slave2:
vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
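Once the service is running (Step 3), a quick sanity check (a hedged extra one-liner, not in the original steps) confirms each instance picked up its own server-id:

mysql -uroot -p -e "show variables like 'server_id';" //Should print 1, 2 and 3 on master, slave1 and slave2 respectively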

Step 3: start the mysql service on all three servers

(1) Create soft links:
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
(2) Start the service:
systemctl stop firewalld.service
setenforce 0
/usr/local/mysql/bin/mysqld_safe --user=mysql & //Start the service
[root@s01 mysql-5.6.36]# netstat -natp | grep 3306 //Check whether the port is listening normally
tcp6       0      0 :::3306       :::*       LISTEN      40105/mysqld

Step 4: configure MySQL master-slave synchronization (one master, two slaves)

(1) For the mysql master-slave configuration, pay attention to authorization. Two users are granted on all database nodes: one is myslave, used by the slaves for synchronization, with the password set to "123"; the other is mha, the monitoring user used by the manager, with the password set to "manager".
mysql -uroot -p //Enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.220.%' identified by '123';
mysql> grant all privileges on *.* to 'mha'@'192.168.220.%' identified by 'manager';
mysql> flush privileges; //Refresh privileges
//The following three grants are added as appropriate (not required in theory); they authorize by host name because MHA checks in host-name form:
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
(2) View the binary log file and synchronization position on the MySQL master:
show master status;
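In this environment the output would look roughly as follows (an illustrative sketch; the File and Position values are exactly what the change master statements in the next step use):

mysql> show master status\G
*************************** 1. row ***************************
             File: master-bin.000001
         Position: 1215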

(3) Next, perform synchronization on slave1 and slave2 respectively:
slave1:
change master to master_host='192.168.220.131',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1215;

slave2:
change master to master_host='192.168.220.131',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1215;
1. Start the slave on the two slave servers and check that both the IO and SQL threads show Yes, indicating that synchronization is normal:
mysql> start slave;
mysql> show slave status\G
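In the \G output, the two lines that indicate synchronization is healthy are:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes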

2. The two slave servers must be set to read-only mode:
mysql> set global read_only=1;
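Note that set global read_only=1 does not survive a mysqld restart; to make the setting permanent (a hedged extra step, not in the original procedure), it can also be written into the [mysqld] section of /etc/my.cnf on both slaves:

[mysqld]
read_only = 1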

Step 5: install MHA

(1) The environment packages MHA depends on must be installed on all servers. First, install the epel repository:
yum install epel-release --nogpgcheck -y
yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN
(2) The node component must be installed on all servers, and the manager component is installed last on the MHA manager node, because manager depends on node. The following demonstrates installing the node component on the master.
1. Install the node component (required on all four servers):
tar zvxf mha4mysql-node-0.57.tar.gz -C /opt/
cd /opt/mha4mysql-node-0.57/
perl Makefile.PL
make && make install

2. Install the manager component on the manager server (required only there):
tar zvxf mha4mysql-manager-0.57.tar.gz -C /opt/
cd /opt/mha4mysql-manager-0.57/
perl Makefile.PL
make && make install
(3) After installation on the manager server, several tools are generated under the /usr/local/bin directory:

masterha_check_ssh: checks MHA's SSH configuration;
masterha_check_repl: checks the MySQL replication status;
masterha_manager: the script that starts the manager;
masterha_check_status: checks the current running status of MHA;
masterha_master_monitor: checks whether the master is down;
masterha_master_switch: starts a failover (automatic or manual);
masterha_conf_host: adds or deletes configured server information;
masterha_stop: stops the manager.

(4) Likewise, node installation generates several tools under the /usr/local/bin directory (these are usually triggered by MHA Manager scripts and do not need to be run manually):

apply_diff_relay_logs: identifies differing relay log events and applies them to the other slaves;
save_binary_logs: saves and copies the master's binary logs;
filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool);
purge_relay_logs: clears the relay logs (without blocking the SQL thread).

(5) Configure passwordless authentication:
1. On manager, configure passwordless authentication to all database nodes:
ssh-keygen -t rsa //Just press Enter at every prompt (the key pair is generated)
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.140
ssh-copy-id 192.168.220.136
//Enter "yes", then enter the password

2. On master, configure passwordless authentication to the database nodes slave1 and slave2:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.140
ssh-copy-id 192.168.220.136
//Enter "yes", then enter the password

3. On slave1, configure passwordless authentication to the database nodes master and slave2:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.136
//Enter "yes", then enter the password

4. On slave2, configure passwordless authentication to the database nodes master and slave1:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.140
//Enter "yes", then enter the password
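As a quick manual spot check (hedged; the full verification is done later with masterha_check_ssh), each host should now reach the others without a password prompt, for example from the manager:

ssh 192.168.220.131 'hostname' //Should print the remote hostname without asking for a password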
(6) Configure MHA:

1. Copy the relevant scripts on the manager node to /usr/local/bin:
cp -ra /opt/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/
ls scripts/
master_ip_failover: VIP management script used during automatic switchover;
master_ip_online_change: VIP management during an online switchover;
power_manager: script that shuts down the host after a failure;
send_report: script that sends an alarm after failover.

//Copy the VIP management script for automatic switchover to /usr/local/bin/:
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/

2. Rewrite the master_ip_failover script:
vim /usr/local/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

############################# Added content section #############################
my $vip = '192.168.220.100';
my $brdc = '192.168.220.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
#################################################################################

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# Enable the VIP on the new master over ssh
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
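If the copy did not preserve the executable bit (a hedged precaution; cp -ra usually does), mark the script executable:

chmod +x /usr/local/bin/master_ip_failover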
3. Create the MHA software directory and copy the configuration file:
mkdir /etc/masterha
cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/

//Edit the profile:
vim /etc/masterha/app1.cnf

[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=manager
remote_workdir=/tmp
repl_password=123
repl_user=myslave
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.220.140 -s 192.168.220.136
shutdown_script=""
ssh_user=root
user=mha

[server1]
hostname=192.168.220.131
port=3306

[server2]
candidate_master=1
hostname=192.168.220.140
check_repl_delay=0
port=3306

[server3]
hostname=192.168.220.136
port=3306
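Since the configuration points manager_workdir and manager_log at /var/log/masterha/app1, creating that directory up front is a harmless precaution (a hedged extra step; the manager may also create it by itself):

mkdir -p /var/log/masterha/app1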
(7) Test ssh passwordless authentication. If it is normal, the output ends with successful:
masterha_check_ssh -conf=/etc/masterha/app1.cnf

masterha_check_repl -conf=/etc/masterha/app1.cnf //Check health
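If both checks pass, their final lines should read roughly as follows (the exact wording in MHA 0.57 may differ slightly):

All SSH connection tests passed successfully.
MySQL Replication Health is OK.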

(8) Note: for the first configuration, you need to go to the master and bring up the virtual IP manually:
/sbin/ifconfig ens33:1 192.168.220.100/24

Step 6: start MHA

(1) Start MHA:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
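To shut the manager down later, masterha_stop (listed among the manager tools above) is used:

masterha_stop --conf=/etc/masterha/app1.cnf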
(2) Check the MHA status; you can see that the current master is the mysql1 node (192.168.220.131):
masterha_check_status --conf=/etc/masterha/app1.cnf
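The output should look roughly like the line below (the pid value here is hypothetical):

app1 (pid:12345) is running(0:PING_OK), master:192.168.220.131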

(3) Fault simulation: 1. First, watch the log on the manager to observe the failover:
tailf /var/log/masterha/app1/manager.log
2. Now kill the mysql service on the master database:
pkill -9 mysql
From the manager log you can see the status of the slave databases, and the VIP switches over to one of them:
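On the promoted slave, the VIP can be confirmed (a hedged check; the failover script above binds it to ens33:1):

ifconfig ens33:1 //192.168.220.100 should now appear on the new master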

At this time, the client can still connect to the database through the virtual IP:
mysql -h 192.168.220.100 -u root -p
For example, now create a new database:
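A hypothetical example (the database name test_db is illustrative only):

mysql> create database test_db;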

1. The newly created database can be seen on the first slave server, which has become the new master:

2. At the same time, thanks to master-slave synchronization, the new database can also be seen on the second slave server:
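Continuing the hypothetical test_db example, the same check works on both machines:

mysql> show databases; //test_db should be listed on the new master and on the remaining slave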
