Learn about LVS load balancing cluster

Table of contents

1, Cluster and distributed

1. Meaning of cluster

2. Distributed system

3. The difference between cluster and distributed

4. Cluster design

2, Introduction to Linux Virtual Server

1. Introduction to LVS

2. Terms of LVS cluster type

3, Working mode of LVS Cluster

1. LVS-NAT mode (NAT mode)

2. LVS-DR (direct routing)

3. LVS-TUN (tunnel mode)

4. Comparison of three modes of LVS

4, LVS-NAT deployment load balancing

1. Description of ipvsadm tool options

2. Deployment environment setting

3. Deploy load balancing

4. Configure client 12.0.0.10 test

1, Cluster and distributed

1. Meaning of cluster

Cluster: a single system formed by combining multiple computers to solve a specific problem

Trunking: a trunked communication system is a mobile communication technology used for group dispatching and command communication, mainly in the professional mobile communication field. All users of the system share the available channels, with automatic channel selection; it is a multi-purpose, efficient wireless dispatching communication system that shares resources, costs, channel equipment, and services.

Clustering: just as redundant components protect you from hardware failure, clustering protects you from failure of the whole system, the operating system, or the application level. A server cluster consists of multiple servers with shared data storage, interconnected through an internal LAN. When one server fails, the applications it runs are automatically taken over by another connected server. In most cases, all computers in the cluster share a common name, and any server in the cluster can serve all network users.

In short: multiple hosts that appear externally as a single whole.

System availability

MTBF: Mean Time Between Failures
MTTR: Mean Time To Repair (restoration)
Availability A = MTBF / (MTBF + MTTR), a value in (0,1): 99%, 99.5%, 99.9%, 99.99%, 99.999%

SLA (Service Level Agreement): a mutually agreed contract between the service provider and the user that defines the service's performance and availability at a certain cost; that cost is usually the main factor driving the quality of the service provided. In practice the target is expressed in "nines" (three nines, four nines, and so on); when the agreed level is not reached, a series of penalty measures kicks in, and meeting this service level is the main goal of operations.

Annual downtime at each availability level (1 year = 365 days = 8760 hours):

90%      → (1 - 90%) × 365 days = 36.5 days
99%      → 8760 × 1% = 87.6 hours
99.9%    → 8760 × 0.1% = 8.76 hours
99.99%   → 8760 × 0.01% = 0.876 hours ≈ 52.6 minutes
99.999%  → 8760 × 0.001% = 0.0876 hours ≈ 5.26 minutes
99.9999% → (1 - 99.9999%) × 365 × 24 × 60 × 60 ≈ 31 seconds
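These figures can be reproduced with a quick shell calculation; a minimal sketch using awk, where the avail variable is just an example input:

avail=99.99                                   # availability percentage to check
awk -v a="$avail" 'BEGIN {
    down = 8760 * (1 - a/100)                 # hours of downtime per year
    printf "%.3f hours = %.2f minutes\n", down, down * 60
}'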

Downtime falls into two categories, planned and unplanned; operations work mainly targets unplanned downtime.


Load scheduling algorithms

Round Robin: distributes incoming requests to the cluster nodes in turn, treating every server equally regardless of its actual connection count or system load.

Weighted Round Robin: distributes requests according to the weight set on the scheduler; nodes with a higher weight receive tasks first and are allocated more requests, ensuring that higher-performance nodes carry more of the load.

Least Connections: allocates based on the number of established connections on each real server, preferring the node with the fewest connections. When all nodes have similar performance, this balances load well.

Weighted Least Connections: when node performance varies widely, the scheduler adjusts weights automatically according to node load, and nodes with higher weights carry a larger share of the active connections.

IP_Hash: hashes the source IP of the request to pick a backend server, so requests from the same IP are always handled by the same server; this lets per-client request context be kept on that server.

url_hash: allocates requests by hashing the requested URL, so each URL is directed to the same backend server; this is most effective when the backends are caches. (I haven't studied this one in depth.)

fair: instead of the built-in round-robin style balancing, it balances intelligently based on page size and load time; requests are allocated according to backend response time, with faster-responding servers allocated first.
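In LVS, the scheduling algorithm is chosen with ipvsadm's -s option (rr, wrr, lc, wlc; the tool is covered in the deployment section below). A minimal sketch using this article's addresses, with the weights 3 and 1 as assumed example values:

# Weighted round robin: the weight-3 node receives roughly three times
# as many requests as the weight-1 node
ipvsadm -A -t 12.0.0.1:80 -s wrr
ipvsadm -a -t 12.0.0.1:80 -r 192.168.18.90:80 -m -w 3
ipvsadm -a -t 12.0.0.1:80 -r 192.168.18.91:80 -m -w 1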

2. Distributed system

Distributed storage: Ceph, GlusterFS, FastDFS, MogileFS

Distributed computing: hadoop, Spark

Distributed common applications

  • Distributed application services, divided by function using microservices (a single application split into a set of small services that coordinate and cooperate with each other to deliver the final value to users)

  • Distributed static resources - static resources are placed on different storage clusters

  • Distributed data and storage -- using key value caching system

  • Distributed computing - use distributed computing for special services, such as Hadoop clusters

3. The difference between cluster and distributed

Cluster: the same business system is deployed on multiple servers. In the cluster, the functions implemented by each server are the same, and the data and code are the same.

Distributed: a service is split into multiple sub services, or it is a different service and deployed on multiple servers. In distributed, the functions implemented by each server are different, and the data and code are also different. The sum of the functions of each distributed server is the complete business.

Distributed improves efficiency by shortening the execution time of a single task, while cluster improves efficiency by increasing the number of tasks executed per unit time.

Take a large website with many users: with a cluster, a load balancer is deployed in front and several servers behind it carry the same business. When a user request arrives, the load balancer chooses a backend server according to each server's current load; if one server goes down, the others take over. In a distributed system, each node performs a different part of the business, so if a node fails, that service may fail with it.

4. Cluster design

1) Design principles

Scalability - the ability of the cluster to scale horizontally

Availability - time between failures (SLA service level agreement)

Performance - access response time

Capacity - maximum concurrent throughput per unit time (C10K concurrency problem)

2) Design and Implementation

2.1 infrastructure level

  • Improve hardware performance - use higher-performance hardware from the entry firewall through to the back-end web servers

  • Multiple domain names - DNS round-robin A-record resolution

  • Multiple entrances - resolve the A record to multiple public IP entrances

  • Multiple data centers - same-city plus remote disaster recovery

  • CDN (Content Delivery Network) - global load balancing based on GSLB (Global Server Load Balance), e.g. DNS-based

2.2 business level

  • Layering: security layer, load layer, static layer, dynamic layer, (cache layer, storage layer) persistent and non-persistent

  • Segmentation: divide large business into small services based on function

  • Distributed: for businesses in special scenarios, distributed computing is used

2, Introduction to Linux Virtual Server

1. Introduction to LVS

LVS is short for Linux Virtual Server, a virtual server cluster system. The project was founded by Dr. Wensong Zhang in May 1998 and is one of the earliest free software projects in China. Alibaba's layer-4 SLB (Server Load Balance) is implemented on top of LVS + Keepalived.

How LVS works: the VS picks an RS with the scheduling algorithm and forwards the request to it, based on the request message's target IP, target protocol, and port. LVS is a kernel-level function; it works on the INPUT chain and processes the traffic arriving at INPUT.

LVS function: load balancing targets high-traffic services, improving application availability and reliability.

Suited to high-traffic businesses: if your application sees heavy traffic, you can distribute it across different ECS (Elastic Compute Service) instances by configuring listening rules. Session persistence can forward requests from the same client to the same backend ECS. ECS instances can be added or removed at any time as the business grows, scaling the service capacity of the application system; this applies to all kinds of web servers and app servers.

Eliminates single points of failure: multiple ECS instances can be added under a load balancing instance. When some instances fail, load balancing automatically shields them and distributes requests to the healthy instances, keeping the application system running normally.

In-city disaster recovery (multi-availability-zone): to provide more stable and reliable service, Alibaba Cloud load balancing deploys multiple availability zones in each region. If the machine room in the primary zone fails or becomes unreachable, load balancing switches to a standby zone within a very short time (roughly a 30s interruption) and restores service; when the primary zone recovers, it automatically switches back. When using load balancing, deploy instances in regions that support multiple zones to achieve local disaster recovery, and plan backend server placement around your application's needs; adding at least one ECS instance per zone makes the load balancing service most efficient under this deployment model.

2. Terms of LVS cluster type

  • VS: Virtual Server; also called Director Server (DS), Dispatcher (scheduler), or Load Balancer (the LVS server)

  • RS: Real Server (the LVS term); called upstream server in nginx and backend server in haproxy (the real servers)

  • CIP: Client IP

  • VIP: Virtual server IP, the VS's external (public-facing) address

  • DIP: Director IP, the VS's internal address

  • RIP: Real server IP

Access flow: CIP <--> VIP == DIP <--> RIP
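For concreteness, here is how these terms map onto the deployment used later in this article:

# CIP = 12.0.0.10        (client)
# VIP = 12.0.0.1         (ens37, the scheduler's external address)
# DIP = 192.168.18.100   (ens33, the scheduler's internal address)
# RIP = 192.168.18.90 and 192.168.18.91 (the two web servers)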

3, Working mode of LVS Cluster

  • LVS-NAT: modifies the target IP of the request message; essentially multi-target DNAT

  • LVS-DR: manipulates and re-encapsulates a new MAC address (direct routing)

  • LVS-TUN: tunnel mode

1. LVS-NAT mode (NAT mode)

LVS-NAT is in essence multi-target DNAT: forwarding is implemented by rewriting the target IP address and target PORT of the request message to the RIP and PORT of a selected RS.

(1) RIP and DIP must be on the same IP network and use private addresses; the gateway of each RS must point to the DIP

(2) Both request and response messages must be forwarded through the Director, which can easily become the system bottleneck

(3) Port mapping is supported: the target PORT of the request message can be modified

(4) The VS must be a Linux system; the RS can run any OS
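To make the DNAT concrete, the address rewriting for a single request looks like this, using this article's addresses (the client port 50000 is an arbitrary example):

# Request:  12.0.0.10:50000 -> 12.0.0.1:80         client to VIP
#           12.0.0.10:50000 -> 192.168.18.90:80    Director DNATs the target to a chosen RS
# Response: 192.168.18.90:80 -> 12.0.0.10:50000    RS replies via its gateway, the DIP
#           12.0.0.1:80 -> 12.0.0.10:50000         Director rewrites the source back to the VIP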

2. LVS-DR (direct routing)

 

Direct Routing (DR mode for short) adopts a semi-open network structure. It is structurally similar to TUN mode, but the nodes are not scattered across locations; they sit on the same physical network as the scheduler. The load scheduler connects to the node servers over the local network, so no dedicated IP tunnel needs to be established.

Direct routing is the default and most widely used LVS mode. Forwarding is done by re-encapsulating a new MAC header on the request message: the source MAC is that of the interface holding the DIP, and the target MAC is that of the interface holding the selected RS's RIP. The source IP/PORT and target IP/PORT remain unchanged.

3. LVS-TUN (tunnel mode)

  1. RIP and DIP may not be in the same physical network. Generally, the gateway of RS cannot point to DIP, and RIP can communicate with the public network. That is, cluster nodes can be implemented across the Internet. DIP, VIP and RIP can be public network addresses.

  2. The VIP address needs to be configured on the channel interface of RealServer to receive the data packet forwarded by DIP and the source IP of the response.

  3. When forwarding DIP to RealServer, you need to use the tunnel. The source IP of the IP header in the outer layer of the tunnel is DIP, the target IP is RIP, and

    The IP header that RealServer responds to the client is obtained according to the analysis of the IP header in the tunnel. The source IP is VIP and the target IP is CIP

  4. The request message shall pass through the Director, but the response shall not pass through the Director. The response shall be completed by RealServer itself

  5. Port mapping is not supported

  6. The OS of RS must support tunnel function

Generally speaking, tunnel mode is used to load-balance groups of cache servers. These cache servers usually sit in different network environments and can respond to clients from a nearby location. When a requested object misses the local cache, the cache server fetches it from the origin server and finally returns the result to the user.
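Point 2 above (the VIP on the RealServer's tunnel interface) could be configured roughly as follows on a Linux RS. This is only a sketch under assumptions: interface names and required sysctls vary by distribution, and 12.0.0.1 is reused here as an example VIP:

modprobe ipip                              # load the IPIP tunnel module (creates tunl0)
ip addr add 12.0.0.1/32 dev tunl0          # bind the VIP to the tunnel interface
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0  # relax reverse-path filtering on the tunnel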

4. Comparison of three modes of LVS

| Category | NAT | TUN | DR |
| --- | --- | --- | --- |
| Advantage | Port translation | WAN support | Best performance |
| Shortcoming | Performance bottleneck | Servers must support tunnel mode | No cross-subnet support |
| Real server requirements | Any | Tunneling support | Non-ARP device |
| Supported network | Private (private network) | LAN/WAN (private/public) | LAN (private network) |
| Number of real servers | Low (10-20) | High (100) | High (100) |
| Real server gateway | LVS intranet address | Own router (defined by the network team) | Own router (defined by the network team) |

4, LVS-NAT deployment load balancing

1. Description of ipvsadm tool options

-A: add a virtual server
-D: delete an entire virtual server
-s: specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc)
-a: add a real server (node server)
-d: delete a node
-t: specify the VIP address and TCP port
-r: specify the RIP address and TCP port
-m: use NAT cluster mode
-g: use DR mode
-i: use TUN mode
-w: set the weight (a weight of 0 pauses the node)
-p 60: keep connections persistent for 60 seconds
-l: list the LVS virtual server table (all entries by default)
-n: display addresses, ports, etc. numerically; often combined with "-l", as in ipvsadm -ln

2. Deployment environment setting

Load scheduler with dual network cards - intranet: 192.168.18.100 (ens33), external: 12.0.0.1 (ens37)
Cluster pool of two web servers: 192.168.18.90 and 192.168.18.91
One NFS shared server: 192.168.18.109
Client: 12.0.0.10

3. Deploy load balancing

Deploying NFS shared server 192.168.18.109

Turn off firewall and setenforce

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0

Install NFS utils rpcbind

[root@localhost ~]# yum install -y nfs-utils rpcbind
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package 1:nfs-utils-1.3.0-0.48.el7.x86_64 already installed and latest version
Package rpcbind-0.2.0-42.el7.x86_64 already installed and latest version
Nothing to do

Open service

[root@localhost ~]# systemctl start rpcbind  # start rpcbind first
[root@localhost ~]# systemctl start nfs      # then nfs, which depends on rpcbind

Create site file

[root@localhost ~]# cd /opt     
[root@localhost opt]# ls
[root@localhost opt]# mkdir benet accp   #create folder
[root@localhost opt]# ls
accp  benet
[root@localhost opt]# chmod 777 accp/ benet/  #Give other users permission
[root@localhost opt]# echo "this is benet ">benet/index.html #Write site file
[root@localhost opt]# echo "this is accp ">accp/index.html   #Write site file
[root@localhost opt]# cat accp/index.html benet/index.html   #View site files
this is accp 
this is benet 

Set sharing policy

[root@localhost benet]# vim /etc/exports

/opt/benet 192.168.18.109/24(rw,sync)
/opt/accp 192.168.18.109/24(rw,sync)

: wq

Publish the service and restart the nfs service

[root@localhost opt]# exportfs -rv
exporting 192.168.18.109/24:/opt/accp
exporting 192.168.18.109/24:/opt/benet
[root@localhost opt]# systemctl restart nfs

Deploy node server

192.168.18.90 deployment

Turn off firewall and setenforce

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0

Install the httpd service and turn it on

[root@localhost ~]# yum install httpd -y
Loaded plugins: fastestmirror, langpacks
12                                             | 3.6 kB     00:00   
123                                            | 3.6 kB     00:00   
Determining fastest mirrors
Package httpd-2.4.6-67.el7.centos.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# systemctl start httpd.service

View nfs services

[root@localhost ~]# showmount -e 192.168.18.109
Export list for 192.168.18.109:
/opt/accp  192.168.18.109/24
/opt/benet 192.168.18.109/24

Mount the site directory /opt/benet to /var/www/html

[root@localhost network-scripts]# vim /etc/fstab

#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=c804f3f5-9cbd-4caa-8c27-c7e25d804c83 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/sr0        /mnt            iso9660                  defaults        0 0 
192.168.18.109:/opt/benet /var/www/html/  nfs         defaults,_netdev 0 0 

Remount everything in fstab with mount -a and check the mounts

[root@localhost network-scripts]# mount -a 
[root@localhost network-scripts]# df
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root   20961280 4696664 16264616   23% /
devtmpfs                   1000032       0  1000032    0% /dev
tmpfs                      1015952       0  1015952    0% /dev/shm
tmpfs                      1015952   25604   990348    3% /run
tmpfs                      1015952       0  1015952    0% /sys/fs/cgroup
/dev/sda1                  2086912  164008  1922904    8% /boot
tmpfs                       203192      12   203180    1% /run/user/42
tmpfs                       203192       0   203192    0% /run/user/0
/dev/sr0                   4414592 4414592        0  100% /mnt
192.168.18.109:/opt/benet 17811456 4592128 13219328   26% /var/www/html
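(For reference, the fstab entry above is equivalent to this one-off mount command, which does not persist across reboots:)

mount -t nfs 192.168.18.109:/opt/benet /var/www/html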

Modify network card configuration file

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=23aa0806-6ee6-4374-a7f0-9bf4979160e9
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.18.90
NETMASK=255.255.255.0
GATEWAY=192.168.18.100   #The gateway points to the intranet address of the dispatching server
DNS2=8.8.8.8


: wq

Restart network card service

[root@localhost network-scripts]# systemctl restart network

  192.168.18.91 deployment

Turn off firewall and setenforce

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0

Install the httpd service and turn it on

[root@localhost ~]# yum install httpd -y
Loaded plugins: fastestmirror, langpacks
12                                             | 3.6 kB     00:00   
123                                            | 3.6 kB     00:00   
Determining fastest mirrors
Package httpd-2.4.6-67.el7.centos.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# systemctl start httpd.service

View nfs services

[root@localhost ~]# showmount -e 192.168.18.109
Export list for 192.168.18.109:
/opt/accp  192.168.18.109/24
/opt/benet 192.168.18.109/24

Mount the site directory /opt/accp to /var/www/html

[root@localhost network-scripts]# vim /etc/fstab

#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=c804f3f5-9cbd-4caa-8c27-c7e25d804c83 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/sr0        /mnt            iso9660                  defaults        0 0 
192.168.18.109:/opt/accp /var/www/html/  nfs         defaults,_netdev 0 0 

Note: the mounted site directory differs from the first node - this one mounts /opt/accp

Remount everything in fstab with mount -a and check the mounts

[root@localhost network-scripts]# mount -a 
[root@localhost network-scripts]# df
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root   20961280 4696664 16264616   23% /
devtmpfs                   1000032       0  1000032    0% /dev
tmpfs                      1015952       0  1015952    0% /dev/shm
tmpfs                      1015952   25604   990348    3% /run
tmpfs                      1015952       0  1015952    0% /sys/fs/cgroup
/dev/sda1                  2086912  164008  1922904    8% /boot
tmpfs                       203192      12   203180    1% /run/user/42
tmpfs                       203192       0   203192    0% /run/user/0
/dev/sr0                   4414592 4414592        0  100% /mnt
192.168.18.109:/opt/accp  17811456 4592128 13219328   26% /var/www/html

Modify network card configuration file

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=23aa0806-6ee6-4374-a7f0-9bf4979160e9
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.18.91
NETMASK=255.255.255.0
GATEWAY=192.168.18.100   #The gateway points to the intranet address of the dispatching server
DNS2=8.8.8.8


: wq

Restart network card service

[root@localhost network-scripts]# systemctl restart network

Deploy scheduling server 192.168.18.100

Add a second network adapter to the scheduler for the external 12.0.0.0/24 network (screenshot omitted)

Turn off firewall and setenforce

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0

Check the network cards, then edit the ens37 profile to configure the external address 12.0.0.1 (the before/after screenshots are omitted here). Save, exit, and restart the network service.
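Since the screenshots are missing, here is a minimal sketch of what ifcfg-ens37 might contain, mirroring the ifcfg-ens33 files shown in this article; everything except the 12.0.0.1 address is an assumed typical value:

TYPE=Ethernet
BOOTPROTO=static                 # static addressing, as on ens33
NAME=ens37
DEVICE=ens37
ONBOOT=yes
IPADDR=12.0.0.1                  # the scheduler's external (VIP) address
NETMASK=255.255.255.0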

[root@localhost network-scripts]# systemctl restart network

Verify the addresses on both network cards (screenshot omitted)

Turn on routing forwarding function

[root@localhost network-scripts]# vim /etc/sysctl.conf 

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward = 1
: wq
[root@localhost network-scripts]# sysctl -p  # apply and verify the setting
net.ipv4.ip_forward = 1

View iptable policy

[root@localhost network-scripts]# iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination 

Add policy

[root@localhost network-scripts]# iptables -t nat -A POSTROUTING -s 192.168.18.0/24 -o ens37 -j SNAT --to 12.0.0.1


This adds a rule to the POSTROUTING chain of the nat table: traffic from the 192.168.18.0/24 segment leaving via ens37 is source-NATed to 12.0.0.1.

Check whether the policy is added successfully

[root@localhost network-scripts]# iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
SNAT       all  --  192.168.18.0/24      0.0.0.0/0            to:12.0.0.1

Add ip_vs module

[root@localhost network-scripts]# cat /proc/net/ip_vs  # not present yet
cat: /proc/net/ip_vs: No such file or directory
[root@localhost network-scripts]# modprobe ip_vs   # load the module
[root@localhost network-scripts]# cat /proc/net/ip_vs  # visible once the module is loaded
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
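To have the module load again after a reboot, one common option (not shown in the original article) is a modules-load.d entry:

echo ip_vs > /etc/modules-load.d/ip_vs.conf   # read by systemd-modules-load at boot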

Install the ipvsadm tool (the install step's output is omitted here; on this CentOS 7 system it would be yum install -y ipvsadm)

Start the service

[root@localhost network-scripts]# systemctl start ipvsadm.service 
Job for ipvsadm.service failed because the control process exited with error code. See "systemctl status ipvsadm.service" and "journalctl -xe" for details.
This fails because /etc/sysconfig/ipvsadm does not exist yet
[root@localhost network-scripts]# ipvsadm-save >/etc/sysconfig/ipvsadm  # create the config file
[root@localhost network-scripts]# systemctl start ipvsadm.service # now the service starts

Create a virtual server (Note: two network cards are required for NAT mode, and the address of the scheduler is the address of the external network interface)

(The four most commonly used LVS scheduling algorithms are round robin (rr), weighted round robin (wrr), least connections (lc), and weighted least connections (wlc). Option "-A" adds a virtual server, "-t" specifies the VIP address and TCP port, and "-s" selects the scheduling algorithm: rr, wrr, lc, or wlc.)

[root@localhost network-scripts]# ipvsadm -C   # clear any existing rules first, to be safe
[root@localhost network-scripts]# ipvsadm -A -t 12.0.0.1:80 -s rr
 # -t specifies the external entry (VIP) address and port; -s rr selects round-robin scheduling

Add server node

Specify the virtual server first, then add the real server addresses: "-r" gives the real server address, "-m" selects NAT mode.

(Option "-a" adds a real server, "-t" specifies the VIP address and TCP port, "-r" specifies the RIP address and TCP port, and "-m" selects NAT cluster mode ("-g" is DR mode, "-i" is TUN mode). The "-m" option can also be followed by "-w" to set a weight (a weight of 0 pauses the node); weights are not used here.)

[root@localhost network-scripts]# ipvsadm -a -t 12.0.0.1:80 -r 192.168.18.91:80 -m
[root@localhost network-scripts]# ipvsadm -a -t 12.0.0.1:80 -r 192.168.18.90:80 -m

View the rules

[root@localhost network-scripts]# ipvsadm   # with no options, lists the current rules (names resolved)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  localhost.localdomain:http rr
  -> 192.168.18.90:http           Masq    1      0          0         
  -> 192.168.18.91:http           Masq    1      0          0         
[root@localhost network-scripts]# ipvsadm -ln  # view the rules with numeric addresses and ports
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.18.90:80             Masq    1      0          0         
  -> 192.168.18.91:80             Masq    1      0          0

Addendum: deleting a server node

ipvsadm -d -t 12.0.0.1:80 -r 192.168.18.90:80

To remove a node from the server pool, use option "-d". The deletion target must be fully specified: the virtual IP address and the node address. The command above removes node 192.168.18.90 from LVS cluster 12.0.0.1.

To delete an entire virtual server, use option "-D" with the virtual IP address and no node, e.g. "ipvsadm -D -t 12.0.0.1:80".

ipvsadm -L     # view node status; add "-n" to display addresses and ports numerically
ipvsadm-save > /etc/sysconfig/ipvsadm     # save the policy

Use the export/import tools ipvsadm-save / ipvsadm-restore to save and restore LVS policies, much like exporting and importing iptables rules.
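As a further example of the "-p" option from the table above (not part of the original walkthrough), session persistence can be added to the existing virtual server with ipvsadm's edit option -E:

ipvsadm -E -t 12.0.0.1:80 -s rr -p 60   # keep each client on the same RS for 60 seconds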

4. Configure client 12.0.0.10 test

(take Win7 as an example)

Configure the client network card with IP 12.0.0.10 (open the network settings via Win+R; screenshot omitted)

Test connectivity to the scheduler at 12.0.0.1 (screenshot omitted)

Web testing service load balancing
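The article tests from the Win7 browser (screenshots omitted); an equivalent check from any Linux host that can reach 12.0.0.1 would be:

for i in 1 2 3 4; do curl -s http://12.0.0.1/; done
# With rr scheduling the responses should alternate, e.g.:
# this is accp
# this is benet
# this is accp
# this is benet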

Refreshing the page alternates between the two site files ("this is accp" and "this is benet"), showing that load balancing is working.

Tags: Linux Operation & Maintenance Load Balance
