Docker internals supplement: verifying Cgroup behavior through resource limits

Cgroup is short for Control Group. Cgroups cap the system resources (CPU, memory, block I/O, etc.) available to a group of processes: put the processes into a cgroup, assign that cgroup a resource quota, and every process in the group is bound by it. For each container, Docker creates a per-subsystem directory under /sys/fs/cgroup/<subsystem>/docker/<container_id>/ on the host, and every resource type found there can be restricted through the cgroup.
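On a cgroup v1 host (the layout assumed throughout this article), the available subsystems can be listed directly; the exact set varies by kernel and distribution:

[root@node ~]# ls /sys/fs/cgroup/
blkio  cpu  cpuacct  cpuset  devices  freezer  hugetlb  memory  ...  pids  systemd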
①. Memory limit
A container's usable memory has two parts: physical memory and the swap partition. If you specify only -m and omit --memory-swap when starting the container, --memory-swap defaults to twice the value of -m.

# -m sets the memory limit; --memory-swap sets the limit on memory + swap

[root@node ~]# docker run -m 200M --memory-swap=300M centos:7

Note: the container must be running; its memory limit is then visible in /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes.
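As a sketch of reading those values back (assuming cgroup v1; <container_id> stands for the full ID from docker ps --no-trunc, and the memsw file exists only if the kernel has swap accounting enabled):

# 200M expressed in bytes
[root@node ~]# cat /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes
209715200

# memory + swap limit, 300M in bytes
[root@node ~]# cat /sys/fs/cgroup/memory/docker/<container_id>/memory.memsw.limit_in_bytes
314572800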
Use the progrium/stress tool to stress-test the container and explore how the memory quota works.
❶. Test memory with progrium/stress. As long as the limit (memory + swap) is not exceeded, memory use follows a loop: allocate memory for the worker, touch it, free it, and repeat indefinitely;

# --vm specifies the number of memory worker threads to start; --vm-bytes specifies how much memory each thread allocates

[root@node ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 250M
...
stress: dbug: [8] allocating 262144000 bytes ... #Allocate memory
stress: dbug: [8] touching bytes in strides of 4096 bytes ...
stress: dbug: [8] freed 262144000 bytes  #Free memory
stress: dbug: [8] allocating 262144000 bytes ... #Reallocate memory
stress: dbug: [8] touching bytes in strides of 4096 bytes ..
stress: dbug: [8] freed 262144000 bytes  #Free memory again
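A convenient way to watch this cycle from the host is docker stats (standard Docker CLI); its MEM USAGE / LIMIT column rises and falls as the worker allocates and frees memory. <container_name> here is a placeholder:

# live resource usage of the running stress container; press Ctrl+C to exit
[root@node ~]# docker stats <container_name>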

❷. When the test with progrium/stress exceeds the limit (memory + swap), the kernel OOM-kills the stress worker and the container exits right after starting

[root@node ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 350M

...
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 0s
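Whether the OOM killer fired can be checked after the fact with docker inspect (State.OOMKilled is a standard field of the inspect output, though it is only set when the container's main process was the one killed); <container_id> is a placeholder:

[root@node ~]# docker inspect -f '{{.State.OOMKilled}}' <container_id>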

②. CPU share
By default, all containers can use the host's CPU resources equally and without restriction. The value set by --cpu-shares is not an absolute amount of CPU; it is a relative weight.

# -c is equivalent to --cpu-shares: it sets the container's CPU share
[root@node ~]# docker run --name container_a -c 1024 centos:7
[root@node ~]# docker run --name container_b --cpu-shares 512 centos:7

Note: the container must be running; its CPU share is then visible in /sys/fs/cgroup/cpu/docker/<container_id>/cpu.shares.
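Reading it back is a one-liner (a sketch; <container_id> stands for the full ID of container_a above):

[root@node ~]# cat /sys/fs/cgroup/cpu/docker/<container_id>/cpu.shares
1024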
Under CPU contention, the CPU a container finally gets is determined by the ratio of its cpu-shares value to the total cpu-shares of all running containers.
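For example, with the two containers below: Container_A gets 1024 / (1024 + 512) = 2/3 ≈ 66.7% of the contended CPU time and Container_B gets 512 / 1536 = 1/3 ≈ 33.3%, i.e. the 2:1 ratio that shows up in the top output later.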
Use the progrium/stress tool to stress-test the containers and explore how the CPU quota works.
❶. Run two containers, Container_A and Container_B, from the progrium/stress image, with CPU shares of 1024 and 512 respectively (a 2:1 CPU allocation ratio)

[root@node ~]# lscpu | grep 'CPU(s)'

CPU(s):                2   # this host has 2 CPUs

# --cpu sets the number of CPU worker threads; the host has 2 CPUs, so 2 workers are needed to keep both busy

[root@node ~]# docker run -d -c 1024 --name Container_A progrium/stress --cpu 2

[root@node ~]# docker run -d -c 512 --name Container_B progrium/stress --cpu 2

❷. Use top to check system resource usage. The %CPU column shows the 2:1 allocation ratio (Container_A gets twice the resources of Container_B); since there are two CPUs, each CPU is split in the same 2:1 ratio.

[root@node ~]# top

...
PID USER      PR  NI    ...SHR S  %CPU %MEM     TIME+ COMMAND   
7167 root      20   0    ...0 R  64.8  0.0   2:56.89 /usr/bin/stress
7164 root      20   0    ...0 R  64.5  0.0   2:56.14 /usr/bin/stress
7416 root      20   0    ...0 R  36.9  0.0   1:10.96 /usr/bin/stress
7417 root      20   0    ...0 R  32.6  0.0   1:11.09 /usr/bin/stress

❸. Now stop Container_A: the still-running Container_B takes over both CPUs, because there is no longer any contention for them.

[root@node ~]# docker stop  Container_A
Container_A
[root@node ~]# top
...   
PID USER      ...SHR S  %CPU %MEM     TIME+ COMMAND  
7416 root      ...96      0 R  99.7  0.0   3:14.73 stress  
7417 root      ...96      0 R  99.0  0.0   3:15.13 stress 

③. Block IO bandwidth limit
Block IO refers to disk reads and writes. Docker can control the bandwidth a container uses to read and write disks by setting a weight or by capping bps and iops.
(1). Weight
By default, all containers read and write disks with equal priority. You can change a container's block-IO priority with the --blkio-weight parameter. Like --cpu-shares, --blkio-weight sets a relative weight, which defaults to 500.
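A minimal sketch of the weight in use (the container names here are made up; on cgroup v1 the weight only takes effect when containers actually compete for the same disk, and typically only with the CFQ scheduler and direct IO):

# blkio_a gets roughly twice the block-IO priority of blkio_b under contention
[root@node ~]# docker run -it --name blkio_a --blkio-weight 600 centos:7
[root@node ~]# docker run -it --name blkio_b --blkio-weight 300 centos:7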
(2). Limit bps and iops
bps is bytes per second, the amount of data read or written per second. Cap a device's bps with --device-read-bps / --device-write-bps;
iops is IO operations per second. Cap a device's iops with --device-read-iops / --device-write-iops.
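An iops cap looks the same as the bps cap demonstrated below, just with an operation count instead of a byte rate (the value 100 is an arbitrary example):

# limit the container to 100 write operations per second on /dev/sda
[root@node ~]# docker run -it --device-write-iops /dev/sda:100 centos:7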
❶. Verify the bandwidth limit: run a base image with no IO bandwidth limit; the write speed is 239 MB/s

[root@node ~]# docker run -it centos:7  
[root@231682cd2622 /]# dd if=/dev/zero of=test.out bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.438719 s, 239 MB/s

❷. With write IO capped at 30M, the speed drops to roughly 30 MB/s. Note that oflag=direct must be added when writing inside the container for --device-write-bps to take effect: cgroup v1 blkio throttling only applies to direct IO, while buffered writes go through the page cache and bypass the cap (which is why the first dd below still reports 254 MB/s).
Note: lsblk shows that the container's storage is backed by the disk sda, so the limit parameter must target /dev/sda.

[root@node ~]# docker run -it --device-write-bps /dev/sda:30M centos:7 

[root@f8163e8c0552 /]# lsblk  # view the disk (sda) backing the container
...
sda      8:0    0  100G  0 disk 
|-sda1   8:1    0    1G  0 part 
`-sda2   8:2    0   99G  0 part
[root@f8163e8c0552 /]# dd if=/dev/zero of=test.out bs=1M count=100 
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.412428 s, 254 MB/s

[root@f8163e8c0552 /]# dd if=/dev/zero of=test.out bs=1M count=100 oflag=direct  # the bps limit only takes effect with this parameter
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 3.3218 s, 31.6 MB/s
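As a sanity check on that figure: 104857600 bytes / 3.3218 s ≈ 31.6 MB/s in dd's decimal megabytes, which is almost exactly 30 MiB/s (30 × 1024 × 1024 ≈ 31.5 MB), assuming Docker interprets the 30M in the limit as binary megabytes.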
