CentOS 7 implementation of network card binding technology

1. CentOS 7 implementation of network card binding technology

1.1 Introduction

Binding (bonding) multiple network cards behind a single IP address provides high availability or load balancing for external services. Two network cards cannot simply be assigned the same IP address directly; instead, a virtual bond interface provides the external connection, and the physical network cards are set to the same MAC address.
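Once the bond described below is up, this can be verified directly: the slave interfaces report the bond's MAC address. A quick check, assuming the interface names used in this article:

# All three interfaces should print the same link/ether (MAC) address:
ip link show bond0 | grep link/ether
ip link show eth0 | grep link/ether
ip link show eth1 | grep link/ether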

1.2 Bonding modes

Mode 0 (balance-rr)
Round-robin policy: packets are transmitted in sequential order from the first slave interface through the last. This mode provides load balancing and fault tolerance.
Mode 1 (active-backup)
Active-backup policy: only one slave is active at a time, and another slave becomes active only when the active slave fails. To avoid confusing the switch, the bond's MAC address is externally visible on only one port.
Mode 3 (broadcast)
Broadcast policy: every frame is transmitted on all slave interfaces. This mode provides fault tolerance.

The active-backup, balance-tlb, and balance-alb modes require no special switch configuration. The other bonding modes require the switch to be configured to aggregate the links: for example, a Cisco switch needs EtherChannel for modes 0, 2, and 3, but LACP with EtherChannel for mode 4.
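The mode of a running bond can also be inspected at runtime through sysfs, which the bonding driver exposes per interface. A quick sketch, assuming the bond0 interface configured in section 1.3:

# Print the current mode, e.g. "balance-rr 0" or "active-backup 1":
cat /sys/class/net/bond0/bonding/mode

# The driver exposes its other parameters in the same directory:
ls /sys/class/net/bond0/bonding/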

1.3 Configuring the bond

1.3.1 The bonding module is not loaded by default, so load it first

[root@c2 ~]# lsmod |grep bonding
[root@c2 ~]# modprobe bonding
[root@c2 ~]# lsmod |grep bonding
bonding               152656  0 
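modprobe only loads the module for the current boot. To have it loaded automatically at every boot on CentOS 7, one option is systemd-modules-load (a minimal sketch):

# Load the bonding module automatically at boot:
echo bonding > /etc/modules-load.d/bonding.conf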

1.3.2 Create the configuration file for the new bond interface

[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0 
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=10.0.1.243
PREFIX=24
GATEWAY=10.0.1.254
DNS1=202.96.128.166
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=0 miimon=100"
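In BONDING_OPTS, mode=0 selects the round-robin policy and miimon=100 tells the driver to check each slave's link state every 100 ms; this polling is what triggers failover. The full set of module parameters can be listed with modinfo:

# Show the bonding module's parameters, including mode and miimon:
modinfo bonding | grep ^parm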

1.3.3 Configuration files for eth0 and eth1

[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by dracut initrd
NAME="eth0"
DEVICE="eth0"
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
[root@c2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Generated by dracut initrd
NAME="eth1"
DEVICE="eth1"
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
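As an alternative to hand-written ifcfg files, NetworkManager can create the same setup directly. A sketch using nmcli with the addresses from this article (property names may vary slightly across NetworkManager versions):

# Create the bond interface with its IP configuration:
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=balance-rr,miimon=100" \
    ipv4.method manual ipv4.addresses 10.0.1.243/24 \
    ipv4.gateway 10.0.1.254 ipv4.dns 202.96.128.166

# Enslave the two physical network cards:
nmcli connection add type bond-slave con-name bond0-eth0 ifname eth0 master bond0
nmcli connection add type bond-slave con-name bond0-eth1 ifname eth1 master bond0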

1.4 Reload the network configuration

[root@c2 ~]# nmcli connection reload        
[root@c2 ~]# systemctl restart network.service
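After the restart, two quick sanity checks:

nmcli device status      # bond0 should be listed as connected
ip addr show bond0       # should show 10.0.1.243/24 on the bond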

1.5 View the bonding status

[root@c2 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ba:03:9e
Slave queue ID: 0
[root@c2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.243/24 brd 10.0.1.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feba:394/64 scope link 
       valid_lft forever preferred_lft forever
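When testing failover it helps to keep this status visible. A simple way is to poll it with watch:

# Refresh the bond status every second while downing interfaces:
watch -n 1 cat /proc/net/bonding/bond0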

1.6 Testing

1.6.1 Ping c2 from another server

[root@c1 ~]# ping -c 1000 c2
PING c2 (10.0.1.243) 56(84) bytes of data.
64 bytes from c2 (10.0.1.243): icmp_seq=1 ttl=64 time=0.232 ms
64 bytes from c2 (10.0.1.243): icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from c2 (10.0.1.243): icmp_seq=3 ttl=64 time=0.255 ms
64 bytes from c2 (10.0.1.243): icmp_seq=4 ttl=64 time=0.255 ms
64 bytes from c2 (10.0.1.243): icmp_seq=5 ttl=64 time=0.249 ms
64 bytes from c2 (10.0.1.243): icmp_seq=6 ttl=64 time=0.236 ms
........

1.6.2 Bring down the eth1 interface

[root@c2 ~]# nmcli connection down eth1
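nmcli connection down deactivates eth1's connection profile. A link failure can also be simulated at a lower level (a sketch; results may differ slightly from a real cable pull):

# Force the interface down at the link level instead:
ip link set eth1 down
# In a virtual machine, disconnecting the virtual NIC on the
# hypervisor side is the closest equivalent to unplugging a cable.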

1.6.3 Result

From c1 (10.0.1.242) icmp_seq=255 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=256 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=257 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=258 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=259 Destination Host Unreachable
From c1 (10.0.1.242) icmp_seq=260 Destination Host Unreachable
64 bytes from c2 (10.0.1.243): icmp_seq=302 ttl=64 time=0.243 ms
64 bytes from c2 (10.0.1.243): icmp_seq=303 ttl=64 time=0.234 ms
64 bytes from c2 (10.0.1.243): icmp_seq=304 ttl=64 time=0.210 ms
64 bytes from c2 (10.0.1.243): icmp_seq=305 ttl=64 time=0.213 ms
64 bytes from c2 (10.0.1.243): icmp_seq=306 ttl=64 time=0.236 ms
64 bytes from c2 (10.0.1.243): icmp_seq=307 ttl=64 time=0.291 ms

Note: with mode 0 (round-robin load balancing), recovery after a slave failure is slow; in the output above, replies only resume at icmp_seq=302, dozens of seconds after the first failures. With mode 1 (active-backup), failover is much faster. To test this, change BONDING_OPTS="mode=1 miimon=100" in ifcfg-bond0.
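A minimal sequence for that test, reusing the commands from sections 1.4 and 1.5 (a sketch, assuming the file paths configured above):

# Switch the bond to active-backup mode:
sed -i 's/mode=0/mode=1/' /etc/sysconfig/network-scripts/ifcfg-bond0
nmcli connection reload
systemctl restart network.service

# Confirm the new mode, then repeat the ping and failover test:
grep "Bonding Mode" /proc/net/bonding/bond0

# Bring eth1 back up after the test:
nmcli connection up eth1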

