Debian and derivative systems: network link aggregation configuration

1. Introduction to link aggregation

Link aggregation is a computer networking technique in which multiple physical ports are combined into a single logical port, so that outbound and inbound traffic is shared across the member ports. The switch decides which member port to use when sending packets to the peer switch according to the load-sharing policy configured by the user. When the switch detects a link failure on one of the member ports, it stops sending packets on that port and redistributes traffic over the remaining links according to the load-sharing policy; once the failed port recovers, it rejoins the group as a sending and receiving port. Link aggregation is an important technique for increasing link bandwidth and providing link resilience and redundancy.

2. Two ways to configure link aggregation in Linux

Linux offers two mechanisms for network card link aggregation: "bond" and "team". In common training materials, a bond is typically built from two network cards, while a team is usually documented as supporting up to eight.
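For comparison, here is a minimal sketch of the team counterpart in active-backup mode (assuming NetworkManager with teamd support is installed; the names team0, team0-port1 and the slave interface ens38 are illustrative, not part of the exercise below):

nmcli connection add type team con-name team0 ifname team0 team.config '{"runner": {"name": "activebackup"}}'   #Create a team with the activebackup runner
nmcli connection add type team-slave con-name team0-port1 ifname ens38 master team0   #Attach a port to the team
nmcli connection up team0

The rest of this article uses the bond mechanism.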

1. Exercise - network card link aggregation (bond)

(1) bonding has seven working modes (a mode-selection sketch follows this list)
0: (balance-rr) round-robin policy: packets are transmitted sequentially across the slaves, from the first to the last. This mode provides load balancing and fault tolerance.
1: (active-backup) active-backup policy: only one slave is active at a time; if it goes down, another slave immediately takes over as the primary device. Only one MAC address is externally visible. This mode provides fault tolerance.
2: (balance-xor) XOR policy: the transmitting slave is selected by [(source MAC address XOR destination MAC address) mod slave count]. This mode provides load balancing and fault tolerance.
3: (broadcast) broadcast policy: every packet is transmitted on all slaves. This mode provides fault tolerance.
4: (802.3ad) IEEE 802.3ad dynamic link aggregation (LACP): creates aggregation groups that share the same speed and duplex settings. Each slave needs a driver that can report speed and duplex, and if a switch is used, it must have 802.3ad mode enabled. This mode provides fault tolerance.
5: (balance-tlb) adaptive transmit load balancing: channel bonding that does not require special switch support. Outgoing traffic is distributed across the slaves according to their current load; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over its MAC address.
6: (balance-alb) adaptive load balancing: includes balance-tlb plus receive load balancing via ARP negotiation. The bonding driver intercepts ARP replies sent by the local system and rewrites the source hardware address with that of one of the slaves, so different peers end up using different hardware addresses for the server.
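The mode is chosen when the bond connection is created. A minimal sketch with nmcli (the connection and interface names here are illustrative; bond.options is the property-based syntax, while the older "mode" keyword used later in this article also works):

nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-rr,miimon=100"   #Create a round-robin bond with a 100 ms MII link monitor
nmcli connection modify bond0 bond.options "mode=active-backup,miimon=100"   #Switch an existing bond to active-backup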

Add two new network cards to the deepin system (here, add them in the virtual machine settings):

nmcli device show | grep DEVICE   #The two new network cards ens38 and ens39 are found

root@fff-PC:~# nmcli device 
DEVICE   TYPE      STATE      CONNECTION 
ens33    ethernet  connected  test       
nm-bond  bond      connected  bond0      
ens38    ethernet  connected  ens38      
ens39    ethernet  connected  ens39      
lo       loopback  unmanaged  --

At this time, there are three network cards on the system.
(2) Create a new bond
The commands are as follows:

nmcli connection add type bond con-name bond0 mode active-backup ipv4.addresses 192.168.58.166/24   #Add a bond named bond0 in active-backup mode with the IP address 192.168.58.166. With no ifname given, NetworkManager names the device nm-bond.
nmcli connection add con-name ens38 ifname ens38 type bond-slave master bond0
#Add the ens38 network card connection to this bond.
nmcli connection add con-name ens39 ifname ens39 type bond-slave master bond0
#Add the ens39 network card connection to this bond.
nmcli connection up ens38  #Start bond slave ens38
nmcli connection up ens39  #Start bond slave ens39
nmcli connection up bond0  #Start bond0
ip a  #Check that the MAC addresses of the ens38 and ens39 network cards are the same
cat /proc/net/bonding/nm-bond  #View the effective bond

(3) View the results. The output is as follows:

root@fff-PC:~# nmcli connection 
NAME   UUID                                  TYPE      DEVICE  
bond0  43653f36-5d14-4822-935c-61ed6aa51387  bond      nm-bond 
ens38  35aad3cc-b249-4892-9ff3-a8a817f4104c  ethernet  ens38   
ens39  9ec6fe9d-1a9f-46df-b6ff-d36a6a546498  ethernet  ens39   
test   9c60fbe4-ef58-419e-9fab-2060311b6301  ethernet  ens33   
root@fff-PC:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1b:d8:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.58.106/24 brd 192.168.58.255 scope global dynamic noprefixroute ens33
       valid_lft 84865sec preferred_lft 84865sec
    inet6 fe80::cc68:b46a:5012:8ed6/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens38: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-bond state UP group default qlen 1000
    link/ether 00:0c:29:1b:d8:9f brd ff:ff:ff:ff:ff:ff
4: ens39: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-bond state UP group default qlen 1000
    link/ether 00:0c:29:1b:d8:9f brd ff:ff:ff:ff:ff:ff
5: nm-bond: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:1b:d8:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.58.105/24 brd 192.168.58.255 scope global dynamic noprefixroute nm-bond
       valid_lft 85633sec preferred_lft 85633sec
    inet 192.168.58.166/24 brd 192.168.58.255 scope global secondary noprefixroute nm-bond
       valid_lft forever preferred_lft forever
    inet6 fe80::cea3:7ebf:9534:1396/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
root@fff-PC:~# 
root@fff-PC:~# cat /proc/net/bonding/nm-bond 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens38
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: ens38
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:1b:d8:9f
Slave queue ID: 0

Slave Interface: ens39
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:1b:d8:a9
Slave queue ID: 0
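
A quick failover test, as a hedged sketch (not from the original article; the interface names follow the setup above). Since no primary slave is configured, the bond stays on the surviving slave even after the failed one returns:

nmcli device disconnect ens38   #Take the currently active slave down
grep "Currently Active Slave" /proc/net/bonding/nm-bond   #Should now report ens39
nmcli device connect ens38      #Bring the slave back; ens39 stays active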

(4) Delete link aggregation in bond mode
The commands are as follows:

nmcli connection delete bond0
nmcli connection delete ens38
nmcli connection delete ens39

systemctl restart NetworkManager
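
To confirm that the bond device is gone after the restart (an optional check using iproute2's brief output; not part of the original steps):

ip -br link show   #nm-bond should no longer be listed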

2. Exercise - bridging
Restore the virtual machine to its initial state.
Create a bridged network connection test-br:

root@fff-PC:~# nmcli connection add type bridge con-name test-br ifname bridge
Connection 'test-br' (539bcc2b-ba2f-43f5-8934-d9702d57ca09) successfully added.
root@fff-PC:~# 

Modify the test-br IP address:

root@fff-PC:~# nmcli connection modify test-br ipv4.method manual ipv4.addresses 192.168.58.166/24 ipv4.gateway 192.168.58.1 ipv4.dns 114.114.114.114
root@fff-PC:~# 
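
To verify the IPv4 settings just applied (an optional check; the -f option limits the output to the ipv4 section):

nmcli -f ipv4 connection show test-br   #Shows ipv4.method, ipv4.addresses, ipv4.gateway and ipv4.dns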

Bridge the network card ens38 to test-br:

nmcli connection add con-name uosbrslave1 type bridge-slave ifname ens38 master test-br

Start bridge network configuration:

nmcli connection up test-br

Restart the network configuration service:

systemctl restart NetworkManager

View the configuration:

nmcli connection show		#If a duplicate test-br profile appears, delete the connection with the wrong UUID, then run nmcli connection up test-br and systemctl restart NetworkManager
ping -I test-br 192.168.100.202  	#-I specifies the interface from which the ping packets are sent
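
Optionally, confirm that ens38 is attached to the bridge (using iproute2's bridge tool; this check is not in the original steps):

bridge link show   #ens38 should be listed with the bridge device as its master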

3. Experiment - delete virtual bridge network card virbr0

Method 1
apt-get remove -y "libvirt-*"		#Quote the pattern so the shell does not expand it
reboot
nmcli connection show			#Virtual bridge network card virbr0 has been deleted
Method 2
virsh net-list				    #View virtual network devices
virsh net-destroy default		#Delete the device named default
virsh net-undefine default		#Delete relevant information in the configuration file
systemctl restart libvirtd
nmcli connection show			#Virtual bridge network card virbr0 has been deleted
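
To confirm that the default virtual network is fully removed (an optional check):

virsh net-list --all			#The network named default should no longer appear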
