1. A cluster is composed of multiple hosts, but it presents itself externally as a single system.
2. In Internet applications, as sites demand ever more from hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up. There are two ways forward:
- Use expensive minicomputers and mainframes
- Build a service cluster out of ordinary servers
#### According to the differences in cluster objectives, clusters fall into three categories:
##### Load balancing (LB) cluster
Improves the application system's responsiveness and handles as many access requests as possible with minimal latency, achieving high concurrency and high load capacity overall.
The load distribution of an LB cluster depends on the distribution algorithm of the master node.
##### High availability (HA) cluster
Improves the reliability of the application system and reduces interruption time as much as possible, ensuring service continuity and achieving a fault-tolerant effect.
HA works in two modes: duplex (active-active) and master-slave (active-standby).
##### High performance computing (HPC) cluster
Improves CPU computing speed and expands the hardware resources and analysis capability of the application system, obtaining computing power comparable to that of large-scale machines and supercomputers.
The high performance of an HPC cluster depends on "distributed computing" and "parallel computing": dedicated hardware and software pool the CPUs, memory, and other resources of multiple servers to achieve computing power that otherwise only large machines and supercomputers possess.
Load balancing clusters are the most commonly used cluster type in enterprises.
#### Three modes of cluster load scheduling technology
- Address translation (NAT mode)
- IP tunnel (TUN mode)
- Direct routing (DR mode)
Layer 1: load scheduler
- Responsible only for responding to client requests and distributing them to the servers in the server pool via the load scheduling algorithm. It is the sole entry point to the whole cluster and uses the public VIP (virtual IP) address, also known as the cluster IP address.
Layer 2: server pool
- Provides the actual application services to clients. Each real server (a server in the server pool is called a real server, or node server) has an independent RIP (real IP) and processes only the client requests distributed to it by the scheduler.
Layer 3: shared storage
- Provides a stable, consistent file access service for all nodes in the server pool, ensuring the consistency of the cluster's files (that is, clients see the same content even when their requests land on different node servers).
##### Round robin
Distributes received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.
##### Weighted round robin
Distributes received access requests in turn according to each real server's processing capacity. The scheduler can automatically query the load of each node and adjust its weight dynamically, ensuring that servers with stronger processing capacity bear more of the access traffic.
##### Least connections
Assigns received access requests, by priority, to the node with the fewest established connections, based on each real server's current connection count.
##### Weighted least connections
When the performance of the server nodes varies greatly, the weight can be adjusted automatically for each real server; nodes with a higher weight bear a larger share of the active connection load.
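The round-robin rotation described above can be sketched in a few lines of shell (a toy illustration only, not LVS code; the node names are made up):

```shell
#!/bin/bash
# Toy round-robin scheduler: hand each request to the next node in
# turn, ignoring per-node load -- exactly what plain rr does. Weighted
# round robin would instead repeat a node in proportion to its weight.
nodes=(web1 web2)
i=0
pick_rr() {
    picked=${nodes[i]}               # node chosen for this request
    i=$(( (i + 1) % ${#nodes[@]} ))  # advance to the next node, wrapping around
}
for req in 1 2 3 4; do
    pick_rr
    echo "request $req -> $picked"   # alternates web1, web2, web1, web2
done
```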
- LVS is a Layer 4 load balancer: it works at the fourth layer (the transport layer) of the OSI model, where TCP and UDP live, so LVS supports load balancing for both TCP and UDP.
- Because LVS balances at Layer 4, it is very efficient compared with higher-layer load balancing solutions such as DNS round-robin resolution, application-layer load scheduling, and client-side scheduling.
1 CentOS 7 host as the LVS gateway (with a second network card added)
2 CentOS 7 hosts as web servers (web1, web2)
1 CentOS 7 host as the NFS shared storage service (with 2 hard disks added)
1 Win7 host as the client
The Win7 client accesses the web address 18.104.22.168; through NAT address translation, its requests are served by web1 and web2 in turn (round robin).
Set up the NFS network file storage service.
(1) Add two hard disks to the NFS storage server and reboot after adding them. Run `ls /dev/` to check that they were added successfully.
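For instance, assuming the new disks come up as /dev/sdb and /dev/sdc (device names can differ per machine), either of these commands confirms they were detected:

```shell
[root@nfs ~]# ls /dev/ | grep "sd[bc]"   # the two new disks should appear
[root@nfs ~]# lsblk                      # or list all block devices with their sizes
```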
```shell
[root@nfs ~]# fdisk /dev/sdb        # partition disk sdb
[root@nfs ~]# mkfs.xfs /dev/sdb1    # format the partition
[root@nfs ~]# fdisk /dev/sdc        # partition disk sdc
[root@nfs ~]# mkfs.xfs /dev/sdc1
```
```shell
[root@nfs ~]# mkdir /opt/kg /opt/ac
[root@nfs ~]# vim /etc/fstab        # add 2 lines of auto-mount settings
/dev/sdb1  /opt/kg  xfs  defaults  0 0
/dev/sdc1  /opt/ac  xfs  defaults  0 0
[root@nfs ~]# mount -a
[root@nfs ~]# df -hT
```
```shell
[root@nfs ~]# systemctl stop firewalld.service
[root@nfs ~]# setenforce 0
[root@nfs ~]# rpm -q nfs-utils      # confirm the NFS component is installed
nfs-utils-1.3.0-0.48.el7.x86_64
[root@nfs ~]# rpm -q rpcbind        # confirm the remote procedure call component is installed
rpcbind-0.2.0-42.el7.x86_64
```
```shell
[root@nfs ~]# vim /etc/exports      # 192.168.100.0/24 is the network allowed to access the shares
/opt/kg 192.168.100.0/24(rw,sync,no_root_squash)
/opt/ac 192.168.100.0/24(rw,sync,no_root_squash)
```
```shell
[root@nfs ~]# systemctl start nfs
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# showmount -e
Export list for nfs:
/opt/ac 192.168.100.0/24
/opt/kg 192.168.100.0/24
```
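If /etc/exports is edited again later, the shares can be re-published without restarting the service; `exportfs` ships with nfs-utils:

```shell
[root@nfs ~]# exportfs -rv    # re-export everything in /etc/exports, verbosely
[root@nfs ~]# exportfs -v     # list the directories currently exported and their options
```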
```shell
[root@nfs ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33   # modify the IP address
[root@nfs ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@nfs ~]# ifconfig              # view the IP address
```
(The configuration of the two web servers is the same.)
```shell
[root@web1 ~]# yum install httpd -y
[root@web1 ~]# systemctl stop firewalld.service
[root@web1 ~]# setenforce 0
[root@web1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@web1 ~]# service network restart
[root@web2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@web2 ~]# service network restart
```
```shell
[root@web1 ~]# showmount -e 192.168.100.120   # both web servers need to verify this
Export list for 192.168.100.120:
/opt/ac 192.168.100.0/24
/opt/kg 192.168.100.0/24
```
```shell
[root@web1 ~]# vim /etc/fstab       # add the mount setting at the end
192.168.100.120:/opt/kg /var/www/html nfs defaults,_netdev 0 0
[root@web2 ~]# vim /etc/fstab
192.168.100.120:/opt/ac /var/www/html nfs defaults,_netdev 0 0
[root@web1 ~]# mount -a             # apply the fstab entries
[root@web1 ~]# df -hT               # view the mounts
```
```shell
[root@web1 ~]# cd /var/www/html/
[root@web1 html]# vim index.html    # add web1's home page content
<h1>this is kg web</h1>
[root@web1 html]# systemctl start httpd
[root@web1 html]# netstat -ntap | grep 80
tcp6       0      0 :::80         :::*          LISTEN      6765/httpd
[root@web2 ~]# cd /var/www/html/
[root@web2 html]# vim index.html    # add web2's home page content
<h1>this is ac web</h1>
[root@web2 html]# systemctl start httpd
[root@web2 html]# netstat -ntap | grep 80
tcp6       0      0 :::80         :::*          LISTEN      4815/httpd
```
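Before configuring the scheduler, it is worth confirming that each real server answers on its own address (assuming web1 is 192.168.100.110 and web2 is 192.168.100.111, as used in the nat.sh script later; adjust if your addresses differ):

```shell
[root@web1 html]# curl -s http://192.168.100.110/   # should return web1's page: this is kg web
[root@web1 html]# curl -s http://192.168.100.111/   # should return web2's page: this is ac web
```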
```shell
[root@lvs ~]# yum install ipvsadm -y
```
```shell
[root@lvs ~]# cd /etc/sysconfig/network-scripts/
[root@lvs network-scripts]# cp -p ifcfg-ens33 ifcfg-ens36   # copy the config for the second NIC, ens36
[root@lvs network-scripts]# ls
ifcfg-ens33  ifdown-post  ifup-eth  ifup-sit
ifcfg-ens36  ......
```
```shell
[root@lvs network-scripts]# vim ifcfg-ens33
[root@lvs network-scripts]# vim ifcfg-ens36
[root@lvs network-scripts]# service network restart
```
```shell
[root@lvs network-scripts]# vim /etc/sysctl.conf   # add at the end:
net.ipv4.ip_forward=1                              # enable the routing (IP forwarding) function
[root@lvs network-scripts]# sysctl -p              # apply the setting
[root@lvs network-scripts]# iptables -F            # clear the filter table rules
[root@lvs network-scripts]# iptables -t nat -F     # clear the nat address translation table
[root@lvs network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.100.0/24 -j SNAT --to-source 22.214.171.124   # add the address translation rule
```
```shell
[root@lvs ~]# cd /etc/sysconfig/network-scripts/
[root@lvs network-scripts]# modprobe ip_vs          # load the ip_vs kernel module
[root@lvs network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@lvs network-scripts]# systemctl start ipvsadm
```
```shell
[root@lvs network-scripts]# cd /opt/
[root@lvs opt]# vim nat.sh          # create a script that polls the two web servers in turn
#!/bin/bash
ipvsadm -C
ipvsadm -A -t 126.96.36.199:80 -s rr                       # rr = round robin
ipvsadm -a -t 126.96.36.199:80 -r 192.168.100.110:80 -m
ipvsadm -a -t 126.96.36.199:80 -r 192.168.100.111:80 -m    # map the two web servers (NAT mode)
ipvsadm
[root@lvs opt]# chmod +x nat.sh     # make the script executable
[root@lvs opt]# ./nat.sh            # run the script
```
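After the script runs, the virtual service and its two real servers can be checked on the scheduler (output columns vary slightly between ipvsadm versions):

```shell
[root@lvs opt]# ipvsadm -ln     # list virtual services and real servers numerically
[root@lvs opt]# ipvsadm -lnc    # list current connection entries to watch the polling
```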
By visiting the gateway's external address, the external client is mapped straight through to the internal web servers, and the responses alternate between them: one request shows the web1 page, the next shows the web2 page, which effectively relieves the pressure on each web server. (If the page does not change between visits, clear the browser cache.)