Load balancing: LVS DR mode

preface
lvs document
lvs construction
Problem 1: DR mode requires all nodes to carry the same VIP, otherwise the TCP three-way handshake cannot complete; sharing the VIP in turn causes an ARP cache problem.
Solution 1: modify the kernel behaviour directly and suppress the ARP responses for the VIP.

preface

client -> dns -> cdn(cache) -> server
The client resolves through DNS first and then hits the cache (CDN); otherwise the traffic would be far too large to hit the servers directly.
What we need to do is load-balance the servers.
Aliyun:
Client -> DNS -> CDN (cache) -> SLB -> server. SLB is the load-balancing layer, i.e. load balancing plus high availability.
LB (load balancing) + HA (high availability): HA keeps the load balancer itself running, while LB schedules the back-end servers evenly.
client -> dns -> cdn(cache) -> LB + HA -> server
Client -> DNS -> CDN (cache) -> LB + HA -> web server (handles static content) -> application server, e.g. Tomcat, which handles dynamic content (Java)
There is load balancing at every layer.
How are the web servers scheduled? A logical access layer is added: there are too many back-end application servers, and the front end cannot hard-code a specific connection in its code, so it talks to a unified scheduling layer, the logical layer.
Client -> DNS -> CDN (cache) -> LB + HA -> web servers (static, multiple) -> logical access layer -> application servers, e.g. Tomcat (dynamic, Java, multiple) -> DB -> storage
Applications deployed directly on physical machines are expensive to maintain, so all of this should run in containers:
docker + k8s + openstack + hadoop (big data: a distributed file system plus parallel computing) + gp (also popular for big data)

Review: everything below depends on the working principle of LVS.

lvs document

Chinese documents

lvs construction

1. Experimental environment: server1 is the scheduler (director), server2 and server3 are the real servers (web servers).
server1 load-balances the two web-server nodes, server2 and server3.
server1:

[root@server1 yum.repos.d]# yum install ipvsadm -y

This tool is used to write LVS policies.
ipvsadm drives the LVS kernel function; installing it makes the corresponding kernel modules available.
Linux kernel modules are dynamic: a module is loaded automatically when it is used, or it can be loaded manually.
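For example (a sketch, not one of the original steps; the module name ip_vs is the standard one):

[root@server1 ~]# modprobe ip_vs        # load the module by hand if it has not been auto-loaded
[root@server1 ~]# lsmod | grep -w ip_vs # confirm it is now resident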

[root@server1 yum.repos.d]# lsmod | grep ip_vs
ip_vs                 145497  0
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

Install httpd on server2 and server3 for testing:

[root@server2 ~]# yum install httpd -y
[root@server2 ~]# systemctl start httpd
[root@server2 ~]# cd /var/www/html/
[root@server2 html]# ls
[root@server2 html]# echo server2 > index.html
[root@server2 html]# ls
index.html
[root@server2 html]# curl localhost
server2

server3 repeats the same operations as server2.
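Concretely, a sketch mirroring the server2 commands (only the page content differs):

[root@server3 ~]# yum install httpd -y
[root@server3 ~]# systemctl start httpd
[root@server3 ~]# echo server3 > /var/www/html/index.html
[root@server3 ~]# curl localhost
server3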

Related reference

[root@server1 yum.repos.d]# ipvsadm --help

View the local documentation:

[root@foundation38 ~]# cd /run/media/kiosk/Backup\ Plus/pub/docs/lvs/
[root@foundation38 lvs]# evince Ali lvs.pdf &

-A adds a virtual service; -t makes it a TCP service; 172.25.138.100 is the virtual IP (any IP that is not already in use); -s chooses the scheduler, and rr (round robin) is the most evenly balanced:

[root@server1 yum.repos.d]# ipvsadm -A -t 172.25.138.100:80 -s rr

-a adds a real server (-r) to the service; -g selects DR mode (direct routing):

[root@server1 yum.repos.d]# ipvsadm -a -t 172.25.138.100:80 -r 172.25.138.2:80 -g
[root@server1 yum.repos.d]# ipvsadm -a -t 172.25.138.100:80 -r 172.25.138.3:80 -g

TCP 172.25.138.100:80 rr is the virtual service just created; there can be several virtual services, each scheduling different hosts.

[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.138.100:80 rr
  -> 172.25.138.2:80              Route   1      0          0
  -> 172.25.138.3:80              Route   1      0          0

But we have not added the virtual IP yet, so the following returns nothing:

[root@server1 yum.repos.d]# ip addr | grep 172.25.138.100

Add the virtual IP:

[root@server1 yum.repos.d]# ip addr add 172.25.138.100/24 dev eth0
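To confirm it is now present (a sketch; the same grep as above should print the address on eth0):

[root@server1 yum.repos.d]# ip addr | grep 172.25.138.100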

The host (client) still cannot get a response:

[root@foundation38 ~]# curl 172.25.138.100

But server1 did receive the requests:

[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.138.100:80 rr
  -> 172.25.138.2:80              Route   1      0          3
  -> 172.25.138.3:80              Route   1      0          2

This proves that after a packet arrives at the scheduler, the scheduler dispatches it to server2/server3, and that server2/server3 cannot yet respond to the request.
Because this is TCP, the three-way handshake (and four-way teardown) must complete.
The request reaches the scheduler with the client's address as source (src) and dst:172.25.138.100 (the VIP) as destination. The client's destination address must not change, otherwise the handshake cannot complete, so in DR mode the scheduler leaves the IP addresses alone and forwards the frame to server2 at layer 2 (only the destination MAC is rewritten).
The packet's src and dst are therefore still the client and 172.25.138.100, but server2 does not hold the VIP, so it treats the packet as mis-delivered and its kernel discards it.
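This can be verified on a real server (a sketch; tcpdump is assumed to be installed and the interface to be eth0): the frames arrive with the VIP as destination even though server2 does not own that address yet.

[root@server2 ~]# tcpdump -nn -i eth0 host 172.25.138.100 and tcp port 80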

So, on server2 (and likewise on server3), add the VIP:

[root@server2 ~]# ip addr add 172.25.138.100/32 dev eth0
[root@server3 ~]# ip addr add 172.25.138.100/32 dev eth0

Test from the client:

[kiosk@foundation38 Desktop]$ curl 172.25.138.100
server3
[kiosk@foundation38 Desktop]$ curl 172.25.138.100
server2

Problem 1: DR mode requires all nodes to carry the same VIP, otherwise the TCP three-way handshake cannot complete; sharing the VIP in turn causes an ARP cache problem.

The VIP's entry in the local ARP cache:

[root@foundation38 ~]# arp -an | grep 100
? (172.25.138.100) at 52:54:00:58:2e:f1 [ether] on br0

Delete the entry from the local ARP cache:

[root@foundation38 ~]# arp -d 172.25.138.100
[root@foundation38 ~]# arp -an | grep 100

The ARP protocol is learned locally within a VLAN by broadcast.
A ping makes the client learn the address again.
The address obtained this time is different; if it were still the same (the scheduler's MAC), scheduling would keep working, alternating between server2 and server3. But the cached address has changed:
it has changed to server3.

[root@foundation38 ~]# ping 172.25.138.100
[root@foundation38 ~]# arp -an | grep 100
? (172.25.138.100) at 52:54:00:57:08:6c [ether] on br0

Every request now goes to server3:

[root@foundation38 ~]# curl 172.25.138.100
server3
[root@foundation38 ~]# curl 172.25.138.100
server3

Meanwhile the scheduler no longer receives any requests:

[root@server1 ~]# ipvsadm -ln
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.138.100:80 rr
  -> 172.25.138.2:80              Route   1      0          0
  -> 172.25.138.3:80              Route   1      0          0

The MAC address in the ARP cache is server3's:

[root@server3 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:57:08:6c brd ff:ff:ff:ff:ff:ff
52:54:00:57:08:6c is the same address that the host's ARP cache holds above.

ARP caches whichever host responds first; within the same VLAN all the responders have the same priority.
Cause: three nodes in the same VLAN carry the same VIP, so they conflict.
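The conflict is easy to observe from the client (a sketch; arping from the iputils package, bridge interface assumed to be br0): successive probes for the VIP may be answered by different MAC addresses.

[root@foundation38 ~]# arping -I br0 -c 3 172.25.138.100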

Solution 1: modify the kernel behaviour directly and suppress the ARP responses for the VIP.

Because ARP broadcasts are not routed, the ARP response for the VIP must be masked inside the VLAN on server2 (and server3).
Two ways to disable it in the kernel:
1. Modify the kernel parameters directly (see the sketch after this list).
2. Use the ARP firewall shipped with the RHEL system (arptables); it acts only on the ARP protocol.
It has nothing to do with iptables: iptables is the firewall for IP packets, through which all packets pass.
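Method 1 would look roughly like this on the real servers (a sketch, not what this article uses; in that variant the VIP normally sits on lo rather than eth0):

[root@server2 ~]# sysctl -w net.ipv4.conf.all.arp_ignore=1      # only answer ARP for addresses configured on the receiving interface
[root@server2 ~]# sysctl -w net.ipv4.conf.all.arp_announce=2    # use the best matching local address when sending ARP requests
[root@server2 ~]# ip addr add 172.25.138.100/32 dev lo          # VIP on loopback, so eth0 never answers ARP for it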
Here we use method 2.
The related documentation is on the host:

[root@foundation38 ~]# cd /run/media/kiosk/Backup\ Plus/pub/docs/rhel6cluster
[root@foundation38 rhel6cluster]# evince Red_Hat_Enterprise_Linux-6-Virtual_Server_Administration-zh-TW.pdf &

Install the tool on server2 and server3:

[root@server2 ~]# yum install -y arptables_jf
[root@server3 ~]# yum install -y arptables_jf

The ARP firewall has three chains: INPUT (incoming ARP packets), OUTPUT (outgoing ARP packets) and FORWARD.

[root@server2 ~]# arptables -L
Chain INPUT (policy ACCEPT)
Chain OUTPUT (policy ACCEPT)
Chain FORWARD (policy ACCEPT)

Modify the policy on server2 and server3.
INPUT:
-A appends a rule to the INPUT chain;
when an incoming ARP request's target address (-d) is 172.25.138.100, the action (-j) is DROP:

[root@server2 ~]# arptables -A INPUT -d 172.25.138.100 -j DROP

Outgoing ARP (OUTPUT):
ARP is itself a broadcast protocol; when a host joins the VLAN it broadcasts its own MAC address for the other hosts to learn.
So when an ARP packet goes out with source address 172.25.138.100, mangle the source address to 172.25.138.2 (i.e. convert it to the fixed eth0 address that the web site is actually published on):

[root@server2 ~]# arptables -A OUTPUT -s 172.25.138.100 -j mangle --mangle-ip-s 172.25.138.2

Save the policy. Right now it only lives in memory, where the running kernel already applies it,
but it has to be saved to the configuration file, otherwise it disappears on the next reboot:

[root@server2 ~]# arptables-save > /etc/sysconfig/arptables
[root@server2 ~]# cat /etc/sysconfig/arptables
*filter
:INPUT ACCEPT
:OUTPUT ACCEPT
:FORWARD ACCEPT
-A INPUT -j DROP -d 172.25.138.100
-A OUTPUT -j mangle -s 172.25.138.100 --mangle-ip-s 172.25.138.2

On every reboot, the system reads this configuration file through the arptables service and the rules take effect automatically:

[root@server2 ~]# systemctl status arptables.service

Flush the in-memory ARP rules, then restart the service; the rules come back because the system re-reads the arptables file:

[root@server2 ~]# arptables -F
[root@server2 ~]# arptables -L
Chain INPUT (policy ACCEPT)
Chain OUTPUT (policy ACCEPT)
Chain FORWARD (policy ACCEPT)

-nL lists the rules without name resolution:

[root@server2 ~]# systemctl restart arptables.service
[root@server2 ~]# arptables -nL
Chain INPUT (policy ACCEPT)
-j DROP -d 172.25.138.100
Chain OUTPUT (policy ACCEPT)
-j mangle -s 172.25.138.100 --mangle-ip-s 172.25.138.2
Chain FORWARD (policy ACCEPT)

Copy the policy over to server3:

[root@server2 ~]# scp /etc/sysconfig/arptables server3:/etc/sysconfig/

On server3, edit the configuration and change the .2 address to .3 (the same change can also be scripted, see the sketch below):

[root@server3 ~]# vim /etc/sysconfig/arptables
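Equivalently, a sketch of the same edit plus restart (the sed substitution is an assumption, not taken from the original):

[root@server3 ~]# sed -i 's/--mangle-ip-s 172.25.138.2/--mangle-ip-s 172.25.138.3/' /etc/sysconfig/arptables
[root@server3 ~]# systemctl restart arptables.service
[root@server3 ~]# arptables -nL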

Client: test again.

[root@foundation38 ~]# arp -d 172.25.138.100
[root@foundation38 ~]# curl 172.25.138.100
server3
[root@foundation38 ~]# curl 172.25.138.100
server2
