Introduction to docker container technology


  • A container, in the everyday sense, is a basic tool: anything that can hold other items. It may be partially or fully enclosed and is used to contain, store, and transport things; objects placed inside are protected by it.
  • Human beings have used containers for at least 100,000 years, perhaps even millions of years.
  • Types of containers
    Bottle - a container whose mouth is narrower than its body and whose neck is long
    Pot - a vessel with a large opening and a roughly cylindrical shape
    Box - usually cuboid or cylindrical, with a fixed shape
    Basket - woven from strips of material
    Barrel - a cylindrical container
    Bag - a container made of flexible material whose shape may vary with its contents
    Urn - usually a pottery container with a small mouth and a large belly
    Bowl - a container used to hold food
    Cabinet - a piece of furniture made up of boxes
    Sheath - a container for holding a blade

The difference between traditional virtualization and containers

Virtual machine: complete system functionality and strong isolation, but a large footprint (usually measured in GB), heavier resource consumption at runtime, and slow startup (on the order of minutes).
Container: only the core environment the program needs to run; the image runs directly with no installation step. Isolation is weaker, the footprint is very small (usually measured in MB), and startup is fast (on the order of seconds).

Virtualization falls into the following two categories:

Host-level virtualization

  • Full virtualization
  • Paravirtualization

Container-level virtualization

Resources isolated by containers:

  • UTS (host name and domain name)
  • Mount (file system mount tree)
  • IPC (inter-process communication)
  • PID (process tree)
  • User (users and groups)
  • Network (TCP/IP protocol stack)

Linux container technology

Linux containers are not a new concept. The earliest container technology can be traced back to the chroot tool on Unix operating systems in 1982 (to this day, mainstream Unix and Linux operating systems still support and ship this tool).

Linux Namespaces

Namespaces are a powerful feature the Linux kernel provides for container virtualization.

Each container can have its own independent namespaces, so the applications running in it behave as if they were running in a separate operating system. Namespaces ensure that containers do not interfere with one another.

Namespace   System call flag   Isolated content                                Kernel version
UTS         CLONE_NEWUTS       Host name and domain name                       2.6.19
IPC         CLONE_NEWIPC       Semaphores, message queues, and shared memory   2.6.19
PID         CLONE_NEWPID       Process IDs                                     2.6.24
Network     CLONE_NEWNET       Network devices, network stack, ports, etc.     2.6.29
Mount       CLONE_NEWNS        Mount points (file systems)                     2.4.19
User        CLONE_NEWUSER      Users and user groups                           3.8
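These namespaces can be exercised directly with the unshare tool from util-linux, which wraps the clone flags above. A minimal sketch of UTS isolation (assumes unprivileged user namespaces are enabled, as on most modern distributions):

```shell
# Create a new user namespace (--map-root-user makes us root inside it) plus a
# new UTS namespace, change the hostname there, and print it; the change is
# visible only inside the new namespace.
if unshare --user --map-root-user --uts true 2>/dev/null; then
    unshare --user --map-root-user --uts sh -c 'hostname container1; hostname'
else
    echo "unprivileged user namespaces are disabled on this host"
fi
hostname   # the parent namespace keeps its original hostname
```

The other rows of the table map to unshare options in the same way, such as --pid and --net.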


Control groups (cgroups) are a Linux kernel feature used to isolate, limit, and account for shared resources. Only by controlling the resources allocated to containers can Docker avoid resource contention when multiple containers run at the same time.

Control groups can limit a container's memory, CPU, disk I/O, and other resources.

CGroups can limit the following resources:

  • blkio: block device I/O
  • cpu: CPU
  • cpuacct: CPU resource usage reports
  • cpuset: CPU assignment on multiprocessor platforms
  • devices: device access
  • freezer: suspending or resuming tasks
  • memory: memory usage and reporting
  • perf_event: unified performance testing of tasks in a cgroup
  • net_cls: class identifiers for network packets created by tasks in a cgroup

Specifically, control groups provide the following functions:

  • Resource limiting: a group can be configured not to exceed a set limit. For example, the memory subsystem can set a memory usage limit for a process group; once the group's usage reaches the limit, an Out of Memory warning is issued
  • Prioritization: some groups can be given a larger share of CPU and other resources through priorities
  • Resource accounting: counts how many resources the system actually uses, for example using the cpuacct subsystem to record the CPU time consumed by a process group
  • Isolation: separates namespaces for groups so that one group cannot see the processes, network connections, or file systems of another
  • Control: suspending, resuming, and restarting groups of tasks

After installing Docker, users can see the various limits applied to Docker under the /sys/fs/cgroup/memory/ hierarchy, including:

[root@docker ~]# ls /sys/fs/cgroup/memory/
cgroup.clone_children           memory.kmem.slabinfo                memory.memsw.limit_in_bytes      memory.swappiness
cgroup.event_control            memory.kmem.tcp.failcnt             memory.memsw.max_usage_in_bytes  memory.usage_in_bytes
cgroup.procs                    memory.kmem.tcp.limit_in_bytes      memory.memsw.usage_in_bytes      memory.use_hierarchy
cgroup.sane_behavior            memory.kmem.tcp.max_usage_in_bytes  memory.move_charge_at_immigrate  notify_on_release
memory.failcnt                  memory.kmem.tcp.usage_in_bytes      memory.numa_stat                 release_agent
memory.force_empty              memory.kmem.usage_in_bytes          memory.oom_control               system.slice
memory.kmem.failcnt             memory.limit_in_bytes               memory.pressure_level            tasks
memory.kmem.limit_in_bytes      memory.max_usage_in_bytes           memory.soft_limit_in_bytes       user.slice
memory.kmem.max_usage_in_bytes  memory.memsw.failcnt                memory.stat

By modifying the values in these files, users can tune the control groups and limit the resources available to Docker applications.
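The same limits can be applied by hand, which is essentially what Docker does when it writes to these files. A sketch assuming root and a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (file names differ under cgroup v2):

```shell
# Create a child cgroup, cap it at 64 MiB, and move the current shell into it;
# skip gracefully where the v1 memory controller is unavailable.
if [ "$(id -u)" -eq 0 ] && [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    mkdir -p /sys/fs/cgroup/memory/demo
    echo $((64 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/demo/tasks
    cat /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
else
    echo "cgroup v1 memory controller not available"
fi
```

A command such as `docker run -m 512m ...` ends up writing the same memory.limit_in_bytes file under the container's own cgroup directory.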


If we used container features in the raw way, we would have to write our own code to make the system calls that create containers in the kernel, and in practice few people can do that. LXC (LinuX Containers) makes container technology much easier to use: it packages the needed container functionality into a set of tools, greatly reducing the effort of working with containers.

LXC was among the first projects to make complete container technology genuinely usable, with a set of simple tools and templates that greatly simplified the use of containers.

Although LXC greatly simplifies the use of container technology, it is not much less complex than calling the kernel directly: you must still learn the LXC command-line tools, and because containers are created through ad hoc commands, it is hard to migrate data by batching commands. Its isolation is also not as strong as a virtual machine's.

Later, Docker appeared; to some extent, Docker is an enhanced version of LXC.


System version   IP
RedHat 8         192.168.147.134

Install and start LXC

[root@localhost ~]# yum -y install lxc
[root@localhost ~]# systemctl enable --now lxc
Created symlink /etc/systemd/system/ → /usr/lib/systemd/system/lxc.service.
[root@localhost ~]# systemctl status lxc
● lxc.service - LXC Container Initialization and Autoboot Code
   Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2021-12-01 02:30:00 CST; 2s ago
     Docs: man:lxc-autostart

Use of LXC

Check the Linux distribution's kernel support for LXC

[root@localhost ~]# lxc-checkconfig 
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.18.0-193.el8.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Warning: newuidmap is not setuid-root
Warning: newgidmap is not setuid-root
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 

Cgroup v2 mount points: 

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, loaded
CONFIG_NF_NAT_IPV4: enabled, not loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
File capabilities: 

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

If every item reported by the lxc-checkconfig command shows "enabled", LXC can be used directly.
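From there, a container's full life cycle is driven by the standard LXC tools. A hypothetical session (the container name demo and the download-template arguments are examples; it requires root, network access, and the lxc-templates package):

```shell
# Skip gracefully on hosts where LXC is not installed.
if command -v lxc-create >/dev/null 2>&1; then
    lxc-create -n demo -t download -- -d centos -r 8 -a amd64  # build from the download template
    lxc-start -n demo                                          # start the container
    lxc-ls -f                                                  # list containers with state
    lxc-attach -n demo -- hostname                             # run a command inside it
    lxc-stop -n demo                                           # stop it
    lxc-destroy -n demo                                        # remove it
else
    echo "lxc tools not installed"
fi
```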

Basic concepts of docker

docker is a front-end tool for container technology. Containers themselves are a kernel technology; docker only simplifies and popularizes its use.

Creating containers with LXC is hard to do at scale: reproducing an identical container on another host is difficult, and this is where Docker looked for a solution. The core of early Docker versions was therefore LXC: Docker wrapped LXC and used it as its container management engine. However, instead of installing a container on the spot from a template as LXC does, Docker packages an operating system into an image in advance (much as image technology is used in KVM), then copies the image to the target host and deploys it directly.

We can prepare and arrange all the components needed in the user space of an operating system in advance and package them into a single file; this file is called an image.

Docker image files are kept in a centralized, unified Internet repository (a registry). Commonly used images are published there, such as a minimal CentOS system. Sometimes we need to install applications on top of the operating system, for example nginx: we can install nginx in a minimal CentOS system, package it into an image, and put it in the registry. When someone wants to start a container, Docker downloads the required image from the registry to the local host and starts the container from that image.
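That workflow maps to two commands: pull the image from the registry, then start a container from it. A sketch assuming the Docker daemon is installed and running (the image and container names are examples):

```shell
# Skip gracefully where Docker is absent.
if command -v docker >/dev/null 2>&1; then
    docker pull nginx                  # download the image from the registry
    docker run -d --name web nginx     # start a container from the local image
    docker ps                          # the new container shows up as running
else
    echo "docker not installed"
fi
```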

Since Docker version 0.9, in addition to continuing to support LXC, Docker has also introduced its own libcontainer in an attempt to build a more general underlying container virtualization library. Today Docker basically uses libcontainer instead of LXC.

In terms of operating system features, the core technologies underlying Docker are Linux namespaces, control groups, union file systems, and Linux virtual networking.

How docker works

To make containers easier to manage, Docker runs only one business process per user space: one process per container. For example, if we want nginx and tomcat on the same host, nginx runs in an nginx container and tomcat in a tomcat container, and the two communicate through inter-container networking.

LXC treats a container as a complete user space: used like a virtual machine, N processes can run inside it, which makes management within the container inconvenient. Docker's restriction of one process per container makes containers much easier to manage.
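The inter-container communication described above is commonly done with a user-defined bridge network, on which containers resolve each other by name. A sketch (the network and container names are examples; assumes a running Docker daemon):

```shell
# Skip gracefully where Docker is absent.
if command -v docker >/dev/null 2>&1; then
    docker network create appnet                        # user-defined bridge with built-in DNS
    docker run -d --name web --network appnet nginx     # nginx in its own container
    docker run -d --name app --network appnet tomcat    # tomcat in its own container
    docker exec app getent hosts web                    # "web" resolves from inside "app"
else
    echo "docker not installed"
fi
```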

Advantages and disadvantages of using docker:

  • Deleting one container does not affect other containers
  • Debugging is inconvenient and wastes space (each container must carry its own debugging tools, such as the ps command)
  • Easy distribution: truly write once, run anywhere, even more thorough than Java's cross-platform support
  • Easy deployment: whatever the underlying system, as long as Docker is present the image runs directly
  • Layered construction with union mounting

    A container that holds data is called stateful; one without data is stateless. When using containers, statefulness should be avoided and statelessness preferred: data should not live inside the container but in external storage, mounted into the container when needed.
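Keeping data out of the container, as recommended above, means mounting external storage into it; with Docker this is a volume. A sketch (the volume and container names are examples; assumes a running Docker daemon):

```shell
# Skip gracefully where Docker is absent.
if command -v docker >/dev/null 2>&1; then
    docker volume create webdata                                     # storage managed outside any container
    docker run -d --name web -v webdata:/usr/share/nginx/html nginx  # mount it into the container
    docker rm -f web                                                 # deleting the container...
    docker volume ls                                                 # ...leaves the data volume intact
else
    echo "docker not installed"
fi
```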

docker container orchestration

When we build an LNMP stack, there are dependencies among its components: which application should start at what time, and before or after which others. These dependencies must be defined in advance, so that startup happens in a specific order. Docker itself does not provide this capability, so we need a tool built on top of Docker. This capability is called container orchestration: it captures the dependencies and relationships between applications in the order and management logic applied at startup and shutdown.

With Docker, release work in operations should go through an orchestration tool. Without one, managing containers by hand is actually more troublesome than managing programs directly, and it increases the complexity of managing the operations environment.

Common container orchestration tools:

  • Docker Machine + Swarm (manages N Docker hosts as one host) + Compose (single-host orchestration)
  • Mesos (unified resource scheduling and allocation) + Marathon
  • kubernetes --> k8s
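As a single-host illustration, the dependency and startup order described above can be declared for Docker Compose. A minimal hypothetical docker-compose.yml (service names and images are examples, not from the original text):

```shell
# Write a two-service definition in which "web" depends on "app".
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app        # Compose starts app before web
  app:
    image: tomcat
EOF

# Bring the stack up in dependency order, if docker-compose is installed.
command -v docker-compose >/dev/null 2>&1 && docker-compose up -d || true
```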

Tags: Docker

Posted on Wed, 01 Dec 2021 06:17:28 -0500 by TurtleDove