LVS load balancing cluster -- NAT mode

1: What a cluster is

1. A cluster is composed of multiple hosts, but it appears to the outside as a single system

2. As Internet applications place ever higher demands on hardware performance, response speed, service stability, data reliability and so on, a single server can no longer meet the requirements

  • Solutions:

  • Use expensive minicomputers and mainframes

  • Build a service cluster out of ordinary servers

2: Classification of clusters

  • According to the target of the cluster, clusters can be divided into three categories:

  • Load balancing cluster

To improve the responsiveness of the application system, handle as many access requests as possible and reduce latency, achieving high concurrency and high overall load-handling performance (LB);
The load distribution of LB depends on the scheduling algorithm of the master node.

  • High availability cluster

To improve the reliability of the application system and reduce downtime as much as possible, ensuring continuity of service and achieving the fault tolerance of high availability (HA);
HA works in two modes: duplex (active-active) mode and master-slave (active-standby) mode.

  • High performance computing cluster

To increase CPU computing speed and expand the hardware resources and analysis capability of the application system, obtaining high performance computing (HPC) capability comparable to that of large-scale computers and supercomputers;
The high performance of an HPC cluster relies on "distributed computing" and "parallel computing":
dedicated hardware and software integrate the CPU, memory and other resources of multiple servers, realizing computing power that otherwise only large-scale computers and supercomputers possess.

3: Load balancing cluster working mode

  • Load balancing cluster is the most commonly used cluster type in Enterprises

  • Three modes of cluster load scheduling technology

  • Address translation (NAT mode)

  • IP tunnel (TUN mode)

  • Direct routing (DR mode)

4: Load balancing cluster structure

Layer 1: load scheduler

  • It is the sole entry point to the whole cluster: it responds to client requests and distributes them to the servers in the server pool according to the load scheduling algorithm. It uses the public VIP (Virtual IP) address, also known as the cluster IP address

Layer 2: server pool

  • It provides the actual application services to clients. Each real server (a server in the server pool is called a real server or node server) has its own independent RIP (Real IP) and only processes the client requests distributed to it by the scheduler

Layer 3: shared storage

  • Provides stable and consistent file access for all nodes in the server pool, ensuring the consistency of the cluster's files (that is, clients see the same content even when their requests are not handled by the same node server)

5: Load scheduling algorithm of LVS

  • Round Robin (rr)

  • Distributes the received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual number of connections and system load

  • Weighted Round Robin (wrr)

  • Distributes the received access requests in turn according to each real server's processing capacity. The scheduler can automatically query the load of each node and dynamically adjust its weight

  • Ensures that servers with stronger processing capacity carry more of the access traffic

  • Least Connections (lc)

  • Allocates each received access request preferentially to the node with the fewest established connections, based on the number of connections each real server currently holds

  • Weighted Least Connections (wlc)

  • When the performance of the server nodes differs greatly, the weight can be adjusted automatically for each real server

  • Nodes with higher weights bear a larger proportion of the active connection load (see the sketch after this list)
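
A minimal sketch of how these algorithms are selected with ipvsadm (the VIP and real server addresses are taken from the experiment below; the weights are example values): the -s option names the scheduler (rr, wrr, lc, wlc) and -w sets a node's weight.

ipvsadm -A -t 12.0.0.1:80 -s wlc                          'virtual service using weighted least connections'
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.110:80 -m -w 3   'stronger node gets a higher weight'
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.111:80 -m -w 1   'weaker node gets a lower weight'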

6: LVS load balancing mechanism

  • LVS is a layer-4 load balancer: it works on the fourth layer (transport layer) of the OSI model, where TCP and UDP operate, so LVS supports load balancing for both TCP and UDP.
  • Because LVS balances at layer 4, it is very efficient compared with higher-level load balancing solutions such as round-robin DNS resolution, application-layer load scheduling, client-side scheduling and so on.

7: Experimental case

1. Experimental topology

2. Experimental environment

One CentOS 7 host as the LVS gateway (with an additional network card)

Two CentOS 7 hosts as web servers (web1, web2)

One CentOS 7 host as the NFS shared storage server (with two additional hard disks)

One Windows 7 host as the client

3. Purpose of the experiment

The Windows 7 client accesses the web address 12.0.0.1; through NAT address translation, the requests are forwarded to the web1 and web2 hosts in round robin fashion;

Set up the NFS network file storage service.

4. Experimental process

  • (configured on NFS storage server)

(1) Add two hard disks to the NFS storage server and reboot after adding them. Run ls /dev/ to check whether the disks were added successfully

Partition and format the two hard disks:
[root@nfs ~]# fdisk /dev/sdb        'partition disk sdb'
[root@nfs ~]# mkfs.xfs /dev/sdb1    'format the new partition'
[root@nfs ~]# fdisk /dev/sdc        'partition disk sdc'
[root@nfs ~]# mkfs.xfs /dev/sdc1
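
fdisk is interactive; a rough keystroke sequence for creating a single primary partition spanning the whole disk (the same for sdb and sdc) is:

n        'new partition'
p        'primary partition'
1        'partition number 1'
         'press Enter twice to accept the default first and last sectors'
w        'write the partition table and exit'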

(2) Create directories as mount points and mount the partitions
[root@nfs ~]# mkdir /opt/kg /opt/ac
[root@nfs ~]# vim /etc/fstab "add auto mount settings"
'Add 2 lines'
/dev/sdb1       /opt/kg         xfs     defaults        0 0
/dev/sdc1       /opt/ac         xfs     defaults        0 0

[root@nfs ~]# mount -a
[root@nfs ~]# df -hT

(3) Turn off the firewall and check whether the NFS-related packages are installed
[root@nfs ~]# systemctl stop firewalld.service 
[root@nfs ~]# setenforce 0
[root@nfs ~]# rpm -q nfs-utils      'NFS components installed'
nfs-utils-1.3.0-0.48.el7.x86_64
[root@nfs ~]# rpm -q rpcbind 
rpcbind-0.2.0-42.el7.x86_64      'Remote procedure call component installed'
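
If either package were missing, it could be installed first (assuming a configured yum repository):

[root@nfs ~]# yum install nfs-utils rpcbind -y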
(4) Set the sharing rules by editing the exports configuration file
[root@nfs ~]# vim /etc/exports
'192.168.100.0/24 is the network segment allowed to access the shares'
/opt/kg         192.168.100.0/24(rw,sync,no_root_squash)
/opt/ac         192.168.100.0/24(rw,sync,no_root_squash)
(5) Start the services and view the NFS exports
[root@nfs ~]# systemctl start nfs
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# showmount -e
Export list for nfs:
/opt/ac 192.168.100.0/24
/opt/kg 192.168.100.0/24
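
Optionally (not part of the original steps), both services can be enabled at boot, and later edits to /etc/exports can be applied without a restart:

[root@nfs ~]# systemctl enable nfs rpcbind    'start both services automatically at boot'
[root@nfs ~]# exportfs -rv                    're-export all directories after editing /etc/exports'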
(6) Change the network card to host-only mode and modify the IP address

[root@nfs ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33    'modify the IP address'

[root@nfs ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@nfs ~]# ifconfig    'view the IP address'
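
A sketch of the relevant ifcfg-ens33 lines, assuming the NFS server uses 192.168.100.120/24 (the address the web servers query in the next part); the other values are typical defaults:

BOOTPROTO=static
IPADDR=192.168.100.120
NETMASK=255.255.255.0
ONBOOT=yes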

  • (configured on web1 and web2 servers)

(the configuration of the two servers is the same)

(1) Install the Apache service and turn off the firewall
[root@web1 ~]# yum install httpd -y
[root@web1 ~]# systemctl stop firewalld.service 
[root@web1 ~]# setenforce 0
(2) Set the network card to host-only mode and modify the IP address
[root@web1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

[root@web1 ~]# service network restart 

[root@web2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33 

[root@web2 ~]# service network restart 
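
A sketch of the address settings, using the RIPs that appear later in the LVS rules (192.168.100.110 for web1, 192.168.100.111 for web2); in NAT mode the default gateway of each web server must point to the LVS server's internal address so replies return through the gateway (192.168.100.1 is an assumed value for that address):

'web1 (ifcfg-ens33)'
BOOTPROTO=static
IPADDR=192.168.100.110
NETMASK=255.255.255.0
GATEWAY=192.168.100.1        'LVS internal address, assumed value'
ONBOOT=yes
'web2 uses IPADDR=192.168.100.111 with the same netmask and gateway'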

(3) Verify that the NFS shares are visible from the web servers
[root@web1 ~]# showmount -e 192.168.100.120
Export list for 192.168.100.120:
/opt/ac 192.168.100.0/24
/opt/kg 192.168.100.0/24          'Both servers need to be verified'

(4) Automatically mount the NFS shared directories locally
[root@web1 ~]# vim /etc/fstab
'Add mount settings at the end'
192.168.100.120:/opt/kg         /var/www/html   nfs     defaults,_netdev        0 0

[root@web2 ~]# vim /etc/fstab 
192.168.100.120:/opt/ac         /var/www/html   nfs     defaults,_netdev        0 0
 
[root@web1 ~]# mount -a    'apply the entries in fstab'
[root@web1 ~]# df -hT      'check the mount'

(5) Write a home page file on each of the two web servers
[root@web1 ~]# cd /var/www/html/
[root@web1 html]# vim index.html
'web1 home page content'
<h1>this is kg web</h1>

[root@web1 html]# systemctl start httpd 'start service'
[root@web1 html]# netstat -ntap | grep 80
tcp6       0      0 :::80                   :::*                    LISTEN      6765/httpd 

[root@web2 ~]# cd /var/www/html/
[root@web2 html]# vim index.html
'web2 home page content'
<h1>this is ac web</h1>

[root@web2 html]# systemctl start httpd 'start service'
[root@web2 html]# netstat -ntap | grep 80
tcp6       0      0 :::80                   :::*                    LISTEN      4815/httpd 
  • (configured on LVS server)

Configure LVS load balancing

(1) Install ipvsadm service
[root@lvs ~]# yum install ipvsadm -y
(2) Add another network card and set it to host-only mode

(3) Modify IP address
[root@lvs ~]# cd /etc/sysconfig/network-scripts/
[root@lvs network-scripts]# cp -p ifcfg-ens33 ifcfg-ens36    'copy the config as the second network card ens36'
[root@lvs network-scripts]# ls
ifcfg-ens33   ifdown-post       ifup-eth     ifup-sit
ifcfg-ens36    ......
[root@lvs network-scripts]# vim ifcfg-ens33

[root@lvs network-scripts]# vim ifcfg-ens36

[root@lvs network-scripts]# service network restart
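
A sketch of the two interface files: ens33 carries the internal address on 192.168.100.0/24 (192.168.100.1 is assumed, matching the gateway configured on the web servers) and ens36 carries the external address 12.0.0.1 used in the ipvsadm rules. In the copied ifcfg-ens36, remember to change NAME and DEVICE to ens36 and remove the UUID line:

'ifcfg-ens33 (internal)'
IPADDR=192.168.100.1
NETMASK=255.255.255.0
'ifcfg-ens36 (external)'
NAME=ens36
DEVICE=ens36
IPADDR=12.0.0.1
NETMASK=255.255.255.0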

(4) Verify connectivity from the web servers

(5) Turn on the route forwarding function and set firewall rules
[root@lvs network-scripts]# vim /etc/sysctl.conf
'Add at the end'
net.ipv4.ip_forward=1      'enable route forwarding'
[root@lvs ~]# sysctl -p    'apply the setting'

[root@lvs network-scripts]# iptables -F           'flush existing filter rules'
[root@lvs network-scripts]# iptables -t nat -F    'clear the nat address translation table'
[root@lvs network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.100.0/24 -j SNAT --to-source 12.0.0.1    'add the source address translation rule'
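
Both settings can be checked before moving on:

[root@lvs network-scripts]# sysctl net.ipv4.ip_forward        'should print net.ipv4.ip_forward = 1'
[root@lvs network-scripts]# iptables -t nat -nL POSTROUTING   'the SNAT rule should be listed'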

(6) Load the module and start the ipvsadm service
[root@lvs ~]# cd /etc/sysconfig/network-scripts/
[root@lvs network-scripts]# modprobe ip_vs    'load the ip_vs kernel module'
[root@lvs network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@lvs network-scripts]# systemctl start ipvsadm
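
Whether the module is actually loaded can be confirmed with lsmod:

[root@lvs network-scripts]# lsmod | grep ip_vs    'the ip_vs module should appear in the output'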
(7) Add a script to set the LVS rules, grant it execute permission and run it
[root@lvs network-scripts]# cd /opt/
[root@lvs opt]# vim nat.sh
'Script: use the round robin algorithm to distribute requests to the two web servers'
#!/bin/bash
ipvsadm -C
ipvsadm -A -t 12.0.0.1:80 -s rr     'round robin scheduling'
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.110:80 -m
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.111:80 -m   'add the two web real servers in NAT (masquerade) mode'
ipvsadm

[root@lvs opt]# chmod +x nat.sh    'make the nat.sh script executable'
[root@lvs opt]# ./nat.sh           'run the script'
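
Listing the rules in numeric form should show the virtual service 12.0.0.1:80 with the rr scheduler and the two real servers in Masq (NAT) forwarding mode:

[root@lvs opt]# ipvsadm -ln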
(8) Verify on the Windows 7 client

By visiting the gateway's external address, the external client is mapped through to the internal web servers, and the pages are served in round robin fashion, showing the web1 page and the web2 page alternately, which effectively relieves the pressure on each web server. (If the page does not change between requests, clear the browser cache.)
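
The distribution can also be observed on the LVS server itself; each refresh from the client should add a connection entry that alternates between the two RIPs:

[root@lvs opt]# ipvsadm -lnc    'list current connections and the real server that handled each'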
