Load balancing and high availability cluster implemented with pacemaker and haproxy

Pacemaker provides the high-availability layer of the cluster:

The deployment of the two nodes is exactly the same (both run the same haproxy configuration; a sketch follows below):
server3 -> node 1 -> haproxy -> pacemaker/corosync (heartbeat)
server4 -> node 2 -> haproxy -> pacemaker/corosync (heartbeat)
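A minimal sketch of the shared /etc/haproxy/haproxy.cfg; the backend web servers 172.25.30.1 and 172.25.30.2 are assumptions for illustration, only the frontend/VIP side comes from this example:

global
        maxconn 10000
defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s
listen web
        bind *:80                        #later reached through the VIP 172.25.30.100
        balance roundrobin               #load balancing across the backend servers
        server app1 172.25.30.1:80 check #assumed backend web server
        server app2 172.25.30.2:80 check #assumed backend web server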
server3:
Install corosync (the heartbeat layer) and pacemaker on the node

yum install pacemaker corosync -y
cd /etc/corosync/
cp corosync.conf.example corosync.conf
vim corosync.conf
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.30.0 #network address of the cluster subnet
                mcastaddr: 226.94.1.30 #multicast address; every cluster needs its own, otherwise clusters on the same network interfere with each other
                mcastport: 5435 #multicast port; change it together with the address
                ttl: 1 #time-to-live of the multicast packets
        }
}
service {
        name: pacemaker
        ver: 0 #with ver: 0 corosync starts pacemaker itself, so only one service needs to be started; with ver: 1 corosync and the pacemaker service must be started separately
}

/etc/init.d/corosync start
scp corosync.conf server4:/etc/corosync/ #both nodes must use the same multicast address and port
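A quick sanity check once corosync has been started on both nodes (a sketch; the exact output depends on the corosync/pacemaker version):

/etc/init.d/corosync status
corosync-cfgtool -s #ring status should report no faults
ps ax | grep pacemaker #with ver: 0 the pacemaker daemons are spawned by corosync itself
crm_mon -1 #one-shot status; both nodes should show Online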



Install the interactive cluster management shell (crmsh)

yum install pssh-2.3.1-2.1.x86_64.rpm crmsh-1.2.6-0.rc2.2.1.x86_64.rpm -y
#Install the interactive management software (crmsh and its pssh dependency)
crm_verify -VL #Check the configuration for errors: it complains that no STONITH (fence) resource is defined, and a two-node cluster loses quorum as soon as one node fails, so for now we disable STONITH and ignore the quorum policy

Check the state of the two nodes:

crm_mon #first confirm that the heartbeat between the two nodes is still connected (both should show Online)
[root@server3 corosync]# crm_verify -VL
crm(live)configure# property stonith-enabled=false
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.30.100 cidr_netmask=24 op monitor interval=1min
//When one of the two nodes fails, the remaining node alone cannot form a quorate cluster; crm_mon would show the failed node as offline and resources would stop, so ignore the quorum policy
crm(live)configure# property no-quorum-policy=ignore
//Add haproxy resource
crm(live)configure# primitive haproxy lsb:haproxy op monitor interval=1min
//haproxy and the VIP might otherwise end up on different nodes (resource separation), so bind them together in one group
crm(live)configure# group hagroup vip haproxy 
standby: put a node into standby; the node on which the command is executed stops running resources
crm node standby
online: bring the standby node back so that it can run resources again
crm node online
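A minimal failover check, assuming the group currently runs on server3 and the VIP sits on eth0 (the interface name is an assumption):

ip addr show eth0 #the node that owns hagroup should hold 172.25.30.100
crm node standby #run on the node that currently owns the group
crm_mon -1 #vip and haproxy should both have moved to the other node together
crm node online #bring the node back into the cluster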

[root@server3 corosync]# crm_verify -VL


Start the fence service on the physical (host) machine and copy fence_xvm.key to /etc/cluster/ on the nodes

systemctl start fence_virtd.service
cd /etc/cluster/
scp fence_xvm.key root@172.25.30.3:/etc/cluster/
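The key must be identical on every node. A quick check on the host, assuming server4 is 172.25.30.4 and fence_virtd uses its default multicast settings:

scp fence_xvm.key root@172.25.30.4:/etc/cluster/ #same key to the second node
netstat -anulp | grep fence_virtd #fence_virtd should be listening (by default on 1229/udp)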

Install fence on node:

yum install /usr/sbin/fence_xvm #yum resolves and installs the package that provides this binary
mkdir /etc/cluster
ls /etc/cluster/ #check that fence_xvm.key is present
stonith_admin -I #list the available fence agents; fence_xvm should appear
stonith_admin -M -a fence_xvm #show the metadata of the fence_xvm agent to confirm it is usable
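Before adding the STONITH resource it is worth checking that the node can actually reach fence_virtd on the host (the VM names test3 and test4 come from this example):

fence_xvm -o list #should list test3 and test4 if the key and the multicast path are working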
crm(live)configure# property stonith-enabled=true #re-enable fencing in the cluster
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server3:test3;server4:test4" op monitor interval=1min #add the fence resource; pcmk_host_map maps the cluster node names (server3/server4) to the VM names the host knows (test3/test4); the monitor runs every minute, and a node that breaks down is fenced immediately
On which node does the fence resource run relative to the VIP? On the opposite one, so that the node holding the services can still be fenced when it fails.
//Test a node failure (trigger a kernel crash with: echo c > /proc/sysrq-trigger)
The fence agent then power-cycles the crashed node.
fence_xvm -H test4 #run from the surviving node: test4 is powered off and restarted

crm_mon shows the working status of the nodes

After the fenced node restarts, start the heartbeat service again to re-establish the connection between the nodes:
/etc/init.d/corosync start
crm node online

All node configurations can be reviewed with crm configure show.
