LVS load balancing: DR mode + keepalived

Article directory

1, keepalived

(1) What is keepalived

(2) How keepalived works

2, Configuration steps:

Step 1: configure the two DRs

Step 2: configure the first node server web1

Step 3: configure the second node server web2

Step 4: client test

Step 5: deploy keepalived

Step 6: verification of experimental results

1, keepalived

(1) What is keepalived

keepalived is service software that provides high availability in cluster management. Its function is similar to heartbeat: it prevents single points of failure.

1. Three core modules of keepalived:

core: the core module
check: health monitoring
vrrp: the Virtual Router Redundancy Protocol

2. Three important functions of the keepalived service:

Managing LVS
Checking the health of LVS cluster nodes
Providing high availability as a system network service

(2) How keepalived works

1. keepalived is based on VRRP, the Virtual Router Redundancy Protocol.

2. VRRP can be thought of as a protocol for making routers highly available: N routers that provide the same function form a router group with one master and multiple backups. The master holds a VIP that provides service to the outside (the default gateway of the other machines on the LAN points to this VIP) and sends VRRP advertisements by multicast. When the backups stop receiving VRRP packets, the master is considered down, and a backup is elected as the new master according to VRRP priority. In this way the router stays highly available.
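Since these advertisements are ordinary multicast packets, you can watch them on the wire. A quick check (assuming the scheduler interface is ens33, as in the experiment below):

tcpdump -i ens33 vrrp
//the current master should advertise about once per second to 224.0.0.18;
//the vrid and prio values shown depend on your keepalived.conf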

3. Keepalived has three modules: core, check and vrrp. The core module is the heart of keepalived; it is responsible for starting and maintaining the main process and for loading and parsing the global configuration file. The check module performs health checks, including the common check methods. The vrrp module implements the VRRP protocol.

2, Configuration steps:

Description of experimental environment:

(1) Prepare four virtual machines: two as scheduling servers (DRs) and two as node servers;

(2) LVS and keepalived are deployed on the scheduling servers to provide load balancing and dual-machine hot standby;

(3) The client host can access the web pages of the back-end web servers through the virtual IP address;

(4) Expected result: with one DR down, access is still normal and all services keep running as usual.

Role                               IP address
Dispatching server DR1 (master)    192.168.100.201
Dispatching server DR2 (backup)    192.168.100.202
Node server web1                   192.168.100.221
Node server web2                   192.168.100.222
Virtual IP                         192.168.100.10
Client test machine (Win7)         192.168.100.50

Step 1: configure the two DRs

(1) Install the ipvsadm and keepalived packages
yum install ipvsadm keepalived -y
(2) Modify the /etc/sysctl.conf file and add the following lines:
net.ipv4.ip_forward = 1
# turn off ICMP redirect responses
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
Run sysctl -p to make the above configuration take effect.
(3) Configure the virtual network card (ens33:0):
1. The path is /etc/sysconfig/network-scripts/
2. Copy the existing network card configuration directly and modify it:
cp ifcfg-ens33 ifcfg-ens33:0
vim ifcfg-ens33:0    //delete all the original content and add the following:
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0
3. Enable virtual network card:
ifup ens33:0
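To confirm the VIP came up, check the interface; the address from the table above should be bound:

ifconfig ens33:0    //the output should show inet 192.168.100.10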

(4) Write the service startup script, path: /etc/init.d

1. Create the script with vim dr.sh; its content is as follows:
#!/bin/bash
# dr.sh: configure the VIP and the LVS-DR virtual server on the scheduler
GW=192.168.100.1
VIP=192.168.100.10
RIP1=192.168.100.221
RIP2=192.168.100.222
case "$1" in
start)
    /sbin/ipvsadm --save > /etc/sysconfig/ipvsadm
    systemctl start ipvsadm
    /sbin/ifconfig ens33:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev ens33:0
    /sbin/ipvsadm -A -t $VIP:80 -s rr
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
    echo "ipvsadm starting------------------[ok]"
    ;;
stop)
    /sbin/ipvsadm -C
    systemctl stop ipvsadm
    ifconfig ens33:0 down
    route del $VIP
    echo "ipvsadm stopped-------------------[ok]"
    ;;
status)
    if [ ! -e /var/lock/subsys/ipvsadm ]; then
        echo "ipvsadm stopped-------------------"
        exit 1
    else
        echo "ipvsadm running-------------[ok]"
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
2. Add execute permission and start the script
chmod +x dr.sh
service dr.sh start
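At this point the LVS virtual server table should exist; list it with:

ipvsadm -ln
//expected: TCP 192.168.100.10:80 rr, with 192.168.100.221:80 and
//192.168.100.222:80 listed as Route (DR) entries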

(5) The configuration of the second DR is exactly the same as the first.

Step 2: configure the first node server web1

(1) Install httpd
yum install httpd -y
systemctl start httpd.service    //start the service
(2) Write a test web page for the site; this makes it easy to verify the results later
Route:/var/www/html echo "this is accp web" > index.html
(3) Create a virtual network card
1. The path is /etc/sysconfig/network-scripts/
2. Copy the network card configuration and modify it:
cp ifcfg-lo ifcfg-lo:0
3. vim ifcfg-lo:0, delete all the original content and add the following:
DEVICE=lo:0
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes
(4) Write the service startup script, path: /etc/init.d
1. Create the script with vim web.sh; its content is as follows:
#!/bin/bash
# web.sh: bind the VIP to lo:0 and suppress ARP for it on the real server
VIP=192.168.100.10
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    /sbin/route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
2. Add permission and execute
chmod +x web.sh    //add execute permission
service web.sh start    //start the service
(5) Bring up the virtual network card
ifup lo:0
(6) Test whether the web page loads normally; a quick check follows
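For example, request the page directly from the node (or any host on the same segment) with curl:

curl http://192.168.100.221
//expected output: this is accp web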

Step 3: configure the second node server web2

The second web server is configured exactly like the first; the only difference is the content of its test page, changed so that the experimental results can be distinguished.
Route:/var/www/html echo "this is benet web" > index.html
Test whether the web page loads normally:

Step 4: client test

(1) Configure the IP address of the client

(2) Testing

1. Check whether the client can communicate with 192.168.100.10

2. Check whether the web page loads normally (a quick check follows)
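A quick client-side check (because of the rr scheduler, successive requests should alternate between the two test pages):

ping 192.168.100.10
//then open http://192.168.100.10 in the browser and refresh:
//the page should alternate between "this is accp web" and "this is benet web"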

Step 5: deploy keepalived

1, Deploy on the first DR:

(1) Modify the keepalived.conf file in /etc/keepalived/; the key settings are sketched below.
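The original post showed the edits as a screenshot; here is a minimal sketch of what the DR1 (MASTER) configuration could look like. The router_id LVS_01, virtual_router_id 51, priority 100 and auth_pass 1111 are illustrative values, not from the source; the VIP, real-server addresses, rr scheduler and DR mode come from the steps above:

! Configuration File for keepalived
global_defs {
    router_id LVS_01                # illustrative name; must differ on each DR
}

vrrp_instance VI_1 {
    state MASTER                    # DR1 acts as master
    interface ens33
    virtual_router_id 51            # illustrative; must match on both DRs
    priority 100                    # higher than the backup's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111              # illustrative; must match on both DRs
    }
    virtual_ipaddress {
        192.168.100.10
    }
}

virtual_server 192.168.100.10 80 {
    delay_loop 6
    lb_algo rr                      # round-robin, matching dr.sh
    lb_kind DR                      # direct-routing mode
    protocol TCP
    real_server 192.168.100.221 80 {
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
    real_server 192.168.100.222 80 {
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}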

(2) Start service
systemctl start keepalived.service

2, Deploy on the second DR:

(1) Modify the keepalived.conf file; only a few lines differ from DR1, as sketched below
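Under the same assumptions as the DR1 sketch, only three lines change; virtual_router_id, authentication and the whole virtual_server block stay identical:

router_id LVS_02        # instead of LVS_01
state BACKUP            # instead of MASTER
priority 90             # illustrative; must be lower than the master's 100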

(2) Start service
systemctl start keepalived.service
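With both keepalived services running, the VIP should be held only by the current master. keepalived adds it as a secondary address on ens33, so check on each DR with:

ip addr show ens33
//on DR1 (master) the output lists 192.168.100.10; on DR2 it appears only after a failover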

Step 6: verification of experimental results

LVS and keepalived were deployed to provide load balancing and dual-machine hot standby. Now let's simulate a failure and shut down DR1: if the client can still reach the virtual IP address and open the website normally, then DR2 has taken over DR1's work and the single point of failure has been avoided.

(1) Fault simulation: take DR1 down
ifdown ens33:0
(2) Result verification

1. Ping the virtual IP from the client

2. The website is still accessible
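To confirm on the server side that DR2 really took over, watch keepalived's log on DR2; keepalived logs via syslog, and on CentOS 7 the messages usually land in /var/log/messages:

tail -f /var/log/messages
//expected: "VRRP_Instance(VI_1) Transition to MASTER STATE" followed by
//"Entering MASTER STATE" (instance name as in the sketch above)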
