Docker knowledge summary

My blog address:

1. Docker introduction

Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable image, which can then be published to any popular Linux or Windows machine and run there, achieving virtualization. Containers use a sandbox mechanism and are fully isolated from one another, with no interfaces between them.

Docker is an open-source project developed in the Go language.

Official website:

Warehouse (registry): , equivalent to GitHub; images can be published there

1.1 why did Docker appear?

When developing a product, you need two environments, development and production, and different environments need different configurations.

A company typically has two roles: development and operations.

Problem: a developer finishes a product that runs fine on their own computer, but after a version update the service breaks on the server, which is very troublesome for operations staff.

Environment configuration is very troublesome: every machine has to have the environment deployed (Redis clusters, ES, Hadoop, and so on), which is time-consuming and laborious.

A project is published as a jar, but it depends on an environment (Redis, MySQL, JDK, etc.), and traditionally the project cannot be packaged together with that environment.

Previously, the application environment (Redis, MySQL, JDK) had to be configured on the server by hand, which was very troublesome and did not work across platforms: you could not simply develop on Windows and publish to Linux.

In traditional development, developers build the jar package, and operations staff deploy it and bring it online.

Now: development packages, deploys, and brings the application online in one workflow.

Suppose you develop an apk application:

  • java – apk – publish to an app store – users install the apk and use it
  • java – jar plus environment – package the project together with its environment into an image – push to a Docker warehouse – others download the published image and run it directly.

Docker proposes solutions to the above problems.

Docker's idea comes from the container.

JRE - multiple applications on one machine (port conflicts) - the applications interfere with each other.

Isolation: Docker's core idea is packing into boxes, with each box isolated from the others. Through this isolation mechanism, Docker can squeeze the most out of a server.

1.2 Docker history

In 2010, a few young IT engineers founded a company called "dotCloud" in the United States.

The company started out offering PaaS cloud computing services built on LXC container technology, and named its own container technology Docker. When Docker was born it drew little attention from the industry, and the company struggled to survive.

So they chose to open source it. Docker was open-sourced in 2013; afterwards more and more people discovered its advantages, and it took off, releasing a new version every month. Docker 1.0 was released on April 9, 2014.

Before the emergence of container technology, we all used virtual machine technology.

Virtual machine: install VMware on Windows, and through this software virtualize one or more computers, which is very cumbersome.

Virtual machine also belongs to virtualization technology; Docker container technology is also a virtualization technology.

1.3 what can docker do

Virtual machine technology

A virtual machine runs a full operating system with its own kernel and required libraries, and all the developed software runs on that system.

Disadvantages of virtual machine technology:

  1. It occupies a lot of resources
  2. There are many redundant steps (each use requires manual startup)
  3. With many guests open, startup is very slow

Containerization Technology

Containerization technology does not simulate a complete operating system.

  • Comparing the two:

Virtual machine is to create a set of hardware, run a complete operating system, and install and run software on this system.

Applications in a container run directly on the host's kernel. The container has no kernel of its own and no virtualized hardware, so it is very lightweight. Containers are isolated from each other, each with its own file system, and do not affect one another.

2. Docker installation

2.1 basic composition of docker

  • image:

An image is like a template from which container services can be created. Suppose there is a Tomcat image: running it with the run command produces a tomcat01 container (which provides the service). From one image, multiple containers can be created (the final service or project runs inside the containers).

  • container:

A container runs one application or a group of applications independently and is created from an image; you can think of a container as a minimal Linux system.

  • repository

Warehouses store images; they are divided into public warehouses and private warehouses.

Docker Hub (hosted abroad) is the default; Alibaba Cloud and Tencent Cloud also offer container registry services.

2.2 installation of Docker

  • Environmental preparation

A Linux server, CentOS 7


Reference address:

  1. Uninstall old version
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
  2. Install required packages
sudo yum install -y yum-utils
  3. Set up a mirrored warehouse
# Foreign, very slow
sudo yum-config-manager \
    --add-repo \
# Alicloud image installation address
yum-config-manager --add-repo

# Update yum package index
yum makecache fast
  4. Install docker related packages
sudo yum install docker-ce docker-ce-cli

yum install docker-ce-18.06.1.ce-3.el7 docker-ce-selinux-18.06.1.ce-3.el7

sudo yum install docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7
  5. Start docker
systemctl start docker
  6. Use docker version to check whether the installation succeeded

  7. Test hello-world
docker run hello-world

  8. Check the downloaded hello-world image
docker images

  9. Uninstall docker
# Remove dependencies
yum remove docker-ce docker-ce-cli
# Delete the runtime environment
rm -rf /var/lib/docker    # default working path of docker

2.3 operation process of docker run

How does Docker work?

Docker is a client-server system. The Docker daemon runs on the host and is accessed from the client through a socket. The Docker server executes the commands it receives from the docker client.

Why is Docker faster than a VM?

  1. Docker has fewer abstraction layers than a virtual machine
  2. Docker uses the host's kernel, while a VM needs a GuestOS

When creating a new container, Docker does not need to load a fresh operating system kernel the way a virtual machine does, so it avoids the boot process. A virtual machine loading a GuestOS takes minutes; Docker reuses the host's operating system, skips the complex boot, and starts in seconds.

3. Common docker commands

3.1 help command

docker version	#Displays the version information of docker
docker info		#Displays the system information of docker, including the number of images and containers
docker command --help #Help information

Help document address:

3.2 mirror command

  • docker images

    List the images on the local host

REPOSITORY          TAG                 IMAGE ID            CREATED      SIZE
hello-world         latest              feb5d9fea6a5        4 weeks ago  13.3kB
# Explanation
REPOSITORY	the image's repository source
TAG			the image's tag
IMAGE ID	the image's id
CREATED		when the image was created
SIZE		the image's size

# Command options
  -a, --all           	List all images
      --digests         Show digests
  -f, --filter filter   Filter output based on conditions provided
      --format string   Pretty-print images using a Go template
      --no-trunc        Don't truncate output
  -q, --quiet           Only show image IDs
  • docker search

    Search image

docker search mysql

NAME                   DESCRIPTION                                     STARS       OFFICIAL     AUTOMATED
mysql                  MySQL is a widely used, open-source relation...   11587               [OK]
mariadb                MariaDB Server is a high performing open sou...   4407                [OK]
mysql/mysql-server     Optimized MySQL Server Docker images. Create...   857                 [OK]

# Optional filter
	--filter=STARS=3000		Only list images whose STARS count is greater than 3000
  • docker pull

    Download Image

docker pull mysql[:tag]  # Without a tag, downloads the latest; with [:tag], downloads the specified version

Using default tag: latest # If you do not write a tag, the default is the latest version
latest: Pulling from library/mysql
b380bbd43752: Pull complete	# Layered download: the core of a docker image is the union file system. Layers already present locally are not downloaded again when another version is pulled later
f23cbf2ecc5d: Pull complete
30cfc6c29c0a: Pull complete
b38609286cbe: Pull complete
8211d9e66cd6: Pull complete
2313f9eeca4a: Pull complete
7eb487d00da0: Pull complete
4d7421c8152e: Pull complete
77f3d8811a28: Pull complete
cce755338cba: Pull complete
69b753046b9f: Pull complete
b2e64b0ab53c: Pull complete
Digest: sha256:6d7d4524463fe6e2b893ffc2b89543c81dec7ef82fb2020a1b27606666464d87  # digest (signature)
Status: Downloaded newer image for mysql:latest
  • docker rmi

    To delete an image, refer to it by image id or image name

docker rmi -f IMAGE ID # Delete the specified image; separate multiple ids with spaces
docker rmi -f $(docker images -aq) # Delete all images

3.3 container command

Note: we can create containers only after we have images

Test and download a CentOS image

docker pull centos  # the latest version

Create a new container and start

docker run [Optional parameters] image

# Parameter description
--name="Name"   Name the container, to distinguish containers
-d				Run in background (detached) mode
-it				Run in interactive mode and enter the container
-p				Specify the container's port mapping, e.g. -p 8080:8080
	-p	ip:host port:container port
	-p	host port:container port (most common)
	-p	container port
-P				Randomly assign a port

# Use container
# Start and enter the container
docker run -it centos /bin/bash

# Back from container to host
exit	Stop the container and exit
Ctrl + P + Q   Exit without stopping the container

# List all running containers
docker ps     List running containers
docker ps -a  List running containers plus historical containers

-n=?	Show the n most recently created containers
-q		Show only container ids

Delete container

docker rm container id		# Delete the container with the specified id (a running container needs -f)
docker rm -f $(docker ps -aq)		# Delete all containers

Start and stop containers

docker start container id      # start a stopped container
docker restart container id    # restart a container
docker stop container id       # stop a running container
docker kill container id       # force-stop a container

3.4 other common commands

Problem: when we start a centos container in the background with docker run -d image name, docker ps shows that it has already exited. This is a common pitfall: for a container to keep running in the background, it must have a foreground process. If Docker finds no running application, the container stops automatically. Likewise, if an nginx container starts and finds it is not serving anything, it stops immediately because there is no foreground program.
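A minimal sketch of this behavior, assuming the official centos image is available locally (the container names are illustrative):

```shell
# Exits almost immediately: no foreground process keeps it alive
docker run -d --name dies centos

# Keeps running: the shell loop is the container's foreground process
docker run -d --name stays centos /bin/sh -c "while true; do sleep 1; done"

# Only 'stays' shows up as running; 'dies' has already exited
docker ps
```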

  • docker logs

    View log command

docker logs -f -t --tail 5 container id    # Show the last 5 lines of a container's log with timestamps, and follow

  • docker top

    View process information in the container

docker top container id
  • docker inspect

    View the metadata of the image

docker inspect container id

  • docker exec -it container id /bin/bash

    Opens a new terminal inside the container, in which we can operate (commonly used)

    Enter the currently running container

    We usually run the container in the background mode. We need to enter the container and modify some configurations

docker attach container id enters the terminal the container is currently executing in; it does not start a new process.

  • docker cp

    Copy files from container to host

docker cp container id:path in container  destination path on host
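For example (the container name tomcat01 and the paths are illustrative assumptions):

```shell
# Container -> host: copy a file out of the container
docker cp tomcat01:/usr/local/tomcat/conf/server.xml /home/

# Host -> container: docker cp also works in the other direction
docker cp /home/test.java tomcat01:/home/
```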

3.5 summary

attach		# Attach to a specified running container under the current shell
build		# Customized image through Dockerfile
commit		# Commit the current container as a new image
cp			# Copy the specified file or directory from the container to the host
create		# Create a new container, the same as run, but do not start the container
diff		#View docker container changes
events		# Get real-time events from the docker service
exec		# Run the command on an existing container
export		# Export the content stream of the container as a tar archive [corresponding to import]
history		# Show the history of an image
images		# Lists the current image of the system
import		# Create a new file system image from the contents of the tar package [corresponding to export]
info		# Display system related information
inspect		# View container details
kill		# kill specifies the docker container
load		# Load an image from a tar package [corresponding to save]
login		# Register or log in to a docker source server
logout		# Exit from the current Docker registry
logs		# Output current container log information
port		# View the internal source port of the container corresponding to the mapped port
pause		# Pause a container
ps			# List containers
pull		# Pull the specified image or library image from the docker image source server
push		# Push the specified image or library image to the docker source server
restart		# Restart the running container
rm			# Remove one or more containers
rmi			# Remove one or more images [no container can be deleted without using the image; otherwise, you need to delete the relevant container to continue or -f force deletion]
run			# Create a new container and run a command
save		# Save an image as a tar package [corresponding to load]
search		# Search for images in docker hub
start		# Start container
stop		# Stop container
tag			# Label images in source
top			# View the process information running in the container
unpause		# Unsuspend container
version		# View docker version number
wait		# Block until the container stops, then print its exit code

4. Installation and use of docker

4.1 Docker installation Nginx

  1. Search image
docker search nginx

You can also search on Docker Hub and read the help documents there

  2. Download the image
docker pull nginx

  3. Run test
docker run -d --name nginx01 -p 3344:80 nginx

-d		# Run in the background
--name	# Container name
-p		# host port:container port

  • Concept of port exposure

4.2 Docker installation Tomcat

  1. Search image
docker search tomcat
  2. Download the image
docker pull tomcat
  3. Run test
docker run -d --name tomcat01 -p 8080:8080 tomcat
  4. Internet access test

    Accessing it from the Internet returns 404, because the tomcat image docker downloads is not a complete distribution

We use the command to enter the container to view

docker exec -it tomcat01 /bin/bash

whereis tomcat

After entering the tomcat directory we can see that the webapps directory is empty. Deployed projects live in this directory, so an empty folder means nothing has been deployed, hence the 404 page. This is because the mirror downloads the smallest image by default, stripping out everything non-essential to guarantee a minimal runnable environment. Next to webapps there is a webapps.dist directory; copy its entire contents into webapps, then visit again to see the Tomcat home page.
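The fix described above can be sketched as follows (/usr/local/tomcat is the install path used by the official tomcat image):

```shell
# Open a new terminal inside the running container
docker exec -it tomcat01 /bin/bash

# Inside the container: copy the default apps back into webapps
cd /usr/local/tomcat
cp -r webapps.dist/* webapps
```

After this, revisiting the host's mapped port should show the Tomcat home page instead of 404.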

5. Visualization panel portal

Docker graphical interface management tool provides a background panel for us to operate.

Open command

docker run -d -p 8088:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer

After it starts, we can access it in a browser at the server's ip address on port 8088

6. Mirroring

What is an image: an image is a lightweight, executable, self-contained software package that bundles a software runtime environment and the software developed for it. It contains everything needed to run the software: code, runtime, libraries, environment variables, and configuration files.

All applications can be packaged directly into a docker image and run directly.

How to obtain an image: 1. download it from a remote registry; 2. copy it from someone else; 3. build one yourself with a DockerFile.

6.1 UnionFS (Federated file system)

When we download images, we can see layers of downloads

Union FS: a union file system is a layered, lightweight, high-performance file system. It supports stacking file system modifications commit by commit, and can mount different directories under the same virtual file system. Union file systems are the basis of Docker images. Images can be inherited through layering: starting from a base image (which has no parent image), various specific application images can be built.

Features: multiple file systems can be loaded at the same time, but from the outside, only one file system can be seen. Joint loading will overlay all layers of file systems, so that the final file system will contain all underlying files and directories.

6.2 Docker image loading principle

A docker image is actually composed of layer upon layer of file systems; this kind of layered file system is UnionFS.

Bootfs (boot file system) mainly includes bootloader and kernel. Bootloader is mainly used to boot and load the kernel. Bootfs file system will be loaded when Linux starts up. Bootfs is at the bottom of Docker image. This layer is the same as our typical Linux/Unix system, including boot loader and kernel. After the boot is loaded, the whole kernel is in memory. At this time, the right to use the memory has been transferred from bootfs to the kernel. At this time, the system will also unload bootfs.

rootfs (root file system) sits above bootfs. It contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin, /etc. Different rootfs correspond to different operating system distributions, such as Ubuntu, CentOS, and so on.

Usually, the virtual machine CentOS we install is several gigabytes. Why is Docker only a few hundred megabytes?

For a thin OS, rootfs may be very small. You only need to include the most basic commands, tools and program libraries. Because the underlying layer directly uses the Host kernel, you only need to provide rootfs. It can be seen that bootfs are basically the same for different Linux distributions, and rootfs will be different, so different distributions can share bootfs. The virtual machine is minute level and the container is second level.

6.3 mirror layering

When we download an image, we can clearly see that it is downloaded layer by layer in the log output by the terminal.

Why does the Docker image adopt this hierarchical structure?

The biggest advantage is that resources can be shared. For example, if multiple images are built from the same base image, the host only needs to keep one base image on the disk, and only one base image needs to be loaded in memory. In this way, all containers can be served, and each layer of the image can be shared.

To view the image hierarchy, you can use the command: docker image inspect image

All Docker images start from a basic image layer. When modifying or adding new content, a new image layer will be created above the current image layer.

Suppose a new image is created based on Ubuntu Linux: that is the new image's first layer. If a Python package is added, a second image layer is created above the base layer; if a security patch is added on top, a third image layer is created.

Such an image contains three image layers.

As additional layers are added, the image is always the combination of all its current layers. If each of two layers contains 3 files, the image contains the 6 files from both layers.

In a slightly more complex three-layer image, the whole image may present only six files externally even though the layers hold seven, because a top-level file 7 is an updated version of file 5.

In this case, the file in the upper image layer overwrites the file in the lower image layer: the updated version of the file is added to the image as a new image layer.

Docker implements the stacking of image layers through a storage engine (newer versions use a snapshot mechanism) and ensures that multiple image layers are presented as a single unified file system.

The storage engines available on Linux are AUFS, Overlay2, Device Mapper, Btrfs and ZFS. Each storage engine is based on the corresponding file system or block device technology in Linux, and each storage engine has its unique performance characteristics.

On Windows, Docker supports only one storage engine, windowsfilter, which implements layering and CoW on top of the NTFS file system.

All the image layers are stacked and merged to provide a single unified view.


Docker images are read-only. When the container is started, a new writable layer is loaded on the top of the image. This layer is what we usually call the container layer. The layer below the container layer is called the image layer.
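The writable container layer can be observed with docker diff, which lists paths added (A) or changed (C) relative to the read-only image layers (the container name is an illustrative assumption):

```shell
# Start a container and create a file inside it
docker run -d --name layertest centos /bin/sh -c "touch /home/test.txt; sleep 60"

# Show what the container layer changed on top of the image;
# the image itself remains untouched
docker diff layertest
```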

6.4 Commit image

docker commit	# commit a container as a new image

docker commit -m="description" -a="author" container id target image name:[TAG]

In normal use we often need to modify and extend a downloaded image. To save the current state of a container, commit it to produce a new image, much like the snapshot feature of a VM.

  1. After starting tomcat, there is nothing in webapps, so visit 404
  2. Copy your content into webapps
  3. The following command is used to submit as an image, which can be used later through our modified image
docker commit -m="add webapps" -a="lss" xxxxxx tomcat001:1.0

7. Container data volume

7.1 what is a container data volume

In a nutshell: container persistence and synchronization; data can also be shared between containers.

Docker's idea is to package the application and its environment into an image. If data lives only inside the container, it is lost when the container is deleted, so we need to persist it. For example, a MySQL container stores data; if the container is accidentally deleted, the data is gone. Therefore we need to store the data locally on the host.

Containers can share data with each other. Synchronizing data generated inside a Docker container to the local host is the volume technique: directory mounting, i.e. mounting a directory inside the container onto the Linux host.

7.2 mount by command

docker run -it -v host directory:directory in container

# test
docker run -it -v /home/test:/home centos /bin/bash

After the command runs, changes made to the specified directory either inside the container or on the host are visible to the other side; in that case, the mount succeeded.
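A quick way to verify the mount, using the /home/test host directory from the example above (the container id placeholder must be replaced):

```shell
# On the host: create a file in the mounted directory
echo hello > /home/test/host.txt

# Inside the container, the file is visible immediately under /home
docker exec <container id> ls /home

# The reverse also holds: files created under /home in the container
# appear in /home/test on the host
```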

After startup, we can confirm the mount by running docker inspect on the container id


If we stop the container, modify the shared files on the host, and then start the container again, the container sees the updated data as well: synchronization is a two-way process.

Advantage of container volumes: after binding, modifications to local files are synchronized into the container.

7.3 installing MySQL

  1. Search image
docker search mysql
  2. Pull the image
docker pull mysql:5.7
  3. Start the container

    Note that a root password must be configured here in order to enter mysql

    Details: official website address:

-d		# Background operation
-p		# Port mapping host port: container port
-v		# Volume mount
-e		# environment variable
--name	# Container name

docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root --name mysql01 mysql:5.7
  4. View the mounted directory on the host

  5. Use SQLyog to connect to MySQL

    Create a test database

  6. View the contents of the /home/mysql/data directory

  7. Even if the container is deleted, the data volume mounted locally is not lost: container data persistence is achieved

7.4 named mount and anonymous mount

7.4.1 anonymous mount

-v path in container
docker run -d -p 80:80 -v /etc/nginx nginx

# View all volumes
docker volume ls

# This is an anonymous mount: -v gives only the path inside the container, with no path outside the container

7.4.2 named mount
-v volume name:path in container

docker run -d -p 80:80 -v nginx01:/var/nginx nginx

7.4.3 volume mount path view

If no host directory is specified, all docker volumes are stored under /var/lib/docker/volumes/xxxx/_data. With a named mount we can easily find our volume, so named mounts are used in most cases.

7.4.4 determine the mounting method

How to determine whether to mount by name, anonymously, or by a specified path?

-v path in container			# anonymous mount
-v volume name:path in container		   # named mount
-v /host path:path in container		# mount at a specified path
  • read-write permission
docker run -d -p 80:80 -v nginx01:/var/nginx:ro nginx
docker run -d -p 80:80 -v nginx01:/var/nginx:rw nginx

# Change read/write permissions by appending to the container path: ro / rw
ro		readonly  # read-only: the volume can only be written from the host, not from inside the container
rw		readwrite # readable and writable (the default)
# Once set, the permission restricts what we can do with the mounted content from inside the container
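A sketch of the effect of ro, reusing the nginx01 named volume from the example above (the container name is illustrative):

```shell
docker run -d --name nginx-ro -p 80:80 -v nginx01:/var/nginx:ro nginx

# Writing from inside the container now fails with "Read-only file system"
docker exec nginx-ro touch /var/nginx/test.conf

# The host can still modify the volume under /var/lib/docker/volumes/nginx01/_data
```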

7.5 simple use of DockerFile

DockerFile is the build file used to build the docker image, which is a command script

Create a dockerfile01 script file in the /home/docker-test-volume directory. This script generates an image, and each command in it is one layer.

# The contents of the file. Each command in the file is a layer of the image
FROM centos
VOLUME ["volume01","volume02"]
CMD echo "---end---"
CMD /bin/bash

Use the command to build a mirror

Under the same level directory of dockerfile01 file

docker build -f dockerfile01 -t lss/centos .

Use the command docker images to view the image we just built

Start your own container

docker run -it a542aa955836 /bin/bash

Create a container.txt file under volume01 in the container

The volume must be synchronized with a directory outside the container; here it is an anonymous mount

Use the command docker inspect 5c26b2ee40af to view detailed information

7.6 data volume container

Synchronize data among multiple containers


  1. Start a container from the lss/centos image
docker run -it --name centos01 lss/centos

  2. Start another lss/centos container

    Data synchronization is achieved with --volumes-from, mounting onto the container started first

--volumes-from  # Equivalent to an inherited relationship
docker run -it --name centos02 --volumes-from centos01 lss/centos

  3. Create a file in the volume01 folder of centos01

    After entering centos02, the same file is visible in its volume01 folder: data synchronization between containers is achieved.

    Because this image was created by ourselves and configured in the DockerFile above, only the volume01 and volume02 folders are set as shared volumes, so the contents of other folders are not shared.

If we add a centos03 mounted on centos01 and then delete centos01, the shared files are still present on centos02 and centos03: the files are copied both ways.


Configuration information can be passed between containers this way. A data volume container's data lasts until no container uses it any more; and once data has been synchronized to the local host, the local copy is never deleted.

8. DockerFile

Dockerfile is a file used to build a docker image. It generates an image file and publishes it, which can be understood as a command parameter script.

8.1 DockerFile introduction

Check the official website:

This file contains some build commands

Many official website images are basic packages, and many functions are not available. We usually build our own images.

8.2 DockerFile construction and instructions

Building requires a lot of instructions

8.2.1 Foundation
  • Each keyword (instruction) must be uppercase

  • Execute from top to bottom

  • (#) indicates a comment

  • Each instruction creates and commits a new image layer

  1. Write a dockerfile file

  2. docker build builds into an image

  3. docker run run image

  4. docker push publishing images (DockerHub, Alibaba cloud image warehouse)

DockerFile is development oriented. If you want to publish a project and make an image in the future, you need to write a DockerFile file.

Steps: development, deployment, operation and maintenance

DockerFile: build a file that defines all the steps and source code

DockerImages: build the generated image through DockerFile, and finally publish and run it

Docker container: a container is an image that runs to provide services

8.2.2 DockerFile instruction

FROM		# Base image; everything starts from here
MAINTAINER	# Image author: name + email
RUN			# Commands to run while building the image
ADD			# Add content, e.g. a tomcat tar.gz (compressed archives are extracted automatically)
WORKDIR		# The image's working directory
VOLUME		# Directory to mount
EXPOSE		# Port to expose
CMD			# Command run when the container starts; only the last CMD takes effect, and it can be replaced
ENTRYPOINT	# Command run when the container starts; arguments can be appended to it
ONBUILD		# Triggered when this image is used as the base of another DockerFile build
COPY		# Similar to ADD: copy files into the image
ENV			# Set environment variables during the build
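A minimal sketch tying several of these instructions together (the author email and package choice are illustrative assumptions):

```dockerfile
# Base image: everything starts from here
FROM centos
# Image author: name + email
MAINTAINER lss<lss@example.com>
# Environment variable, available during build and at run time
ENV MYPATH /usr/local
# Working directory when entering the container
WORKDIR $MYPATH
# Run while the image is being built
RUN yum -y install vim
# Port the container is expected to listen on
EXPOSE 80
# Run when the container starts; only the last CMD takes effect
CMD echo $MYPATH && /bin/bash
```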

8.3 DockerFile build image

99% of the images in Docker Hub are from this basic image: FROM scratch, and then configure the required software and configuration to build.

  1. Writing Dockerfile files
FROM centos

ENV MYPATH /usr/local

RUN yum -y install vim
RUN yum -y install net-tools


CMD echo "---end---"
CMD /bin/bash

  2. Build an image from this file
docker build -f file name -t image name:[tag] .

docker build -f mydockerfile-centos -t mycentos:1.0 .   # Note the trailing dot; it must not be omitted

  3. Test run
docker run -it mycentos:1.0  # Enter the image created and built by ourselves

Because the DockerFile contains the commands that install net-tools and vim, those tools are available only in our image; in the vanilla image downloaded from the official site they cannot be used.

8.4 difference between CMD and ENTRYPOINT

  • CMD: Specifies the command to be run when the container is started. Only the last one will take effect and can be replaced
  • ENTRYPOINT: Specifies the command to run when the container is started. You can append the command

Test CMD

  1. Create a file dockerfile-cmd-test
FROM centos
CMD ["ls","-a"]
  2. Build the image: docker build -f dockerfile-cmd-test -t cmdtest .

  3. Run the image: docker run 037860b45b64; at run time the ls -a command takes effect

  1. If you try to append a flag to this image, docker run 037860b45b64 -l reports an error

	because with CMD, the appended -l replaces the whole CMD ["ls","-a"]; -l on its own is not a command, so the run fails

To get a long listing, replace the full command instead: docker run 037860b45b64 ls -l


  1. Create a file named dockerfile-entrypoint-test
FROM centos
# Differences from CMD
ENTRYPOINT ["ls","-a"]
  1. Build the image: docker build -f dockerfile-entrypoint-test -t entrytest .
  2. Run the test docker run 368a821b1dcb

  1. A parameter docker run 368a821b1dcb -l is appended during execution

    It still runs normally: the -l is spliced onto the end of ENTRYPOINT, giving ls -a -l
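The contrast can be modeled as how Docker assembles the container's final command line: ENTRYPOINT stays fixed in front, while CMD (or the arguments given after the image name at docker run) fills in the rest. A minimal sketch of the rule, not Docker's actual code:

```python
def final_argv(entrypoint, cmd, run_args):
    """Arguments after the image name replace CMD entirely;
    ENTRYPOINT always stays in front of whatever remains."""
    args = run_args if run_args else cmd
    return entrypoint + args

# CMD-only image: "-l" replaces ["ls", "-a"], and "-l" alone is not a command
assert final_argv([], ["ls", "-a"], ["-l"]) == ["-l"]
# ENTRYPOINT image: "-l" is appended, producing "ls -a -l"
assert final_argv(["ls", "-a"], [], ["-l"]) == ["ls", "-a", "-l"]
```

This is why the cmdtest image errors out on `-l` while the entrytest image happily runs `ls -a -l`.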

8.5 building a tomcat image

  1. Prepare the image file, tomcat compressed package and jdk compressed package

  1. Write a Dockerfile and name it Dockerfile, the official default name; that way it is found automatically during build and does not need to be specified with -f
FROM centos

COPY README.txt /readme.txt
ADD jdk-8u311-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.54.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local

ENV JAVA_HOME /usr/local/jdk1.8.0_311
ENV CLASSPATH $JAVA_HOME/bin/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.54
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.54


CMD /usr/local/apache-tomcat-9.0.54/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.54/logs/catalina.out
  1. structure
docker build -t diytomcat .		# Since the file is named Dockerfile, the -f parameter is not needed
  1. View the built image file
docker images

  1. Start mirror test
docker run -d -p 8080:8080 --name lss-tomcat -v /opt/build/tomcat/webapps:/usr/local/apache-tomcat-9.0.54/webapps/test -v /opt/build/tomcat/tomcat-logs/:/usr/local/apache-tomcat-9.0.54/logs diytomcat

  1. Write a project locally for publishing

    The volume is mounted when the image is started, so we can write the project locally to realize synchronous publishing

    Create a WEB-INF folder under /opt/build/tomcat/webapps/ (the host directory mounted with -v above), and create a web.xml file inside it

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns=""

Create an index.jsp file in the same mounted directory

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<h2>Hello Tomcat Docker!</h2>
<%
    System.out.println("--- tomcat test ---");
%>

So far, the project release has been realized

  1. test

View log file output information

9. Publish image

9.1 publish to DockerHub

  1. Register an account on Docker Hub and log in

  2. Commit the mirror on the server

    Sign in

docker login -u lishisen -p ******

  1. After logging in, you can submit the image
# Images built earlier are rejected on push because they are not tagged under your username
# Add a tag first
docker tag [image id] lishisen/tomcat:1.0
# Then the submission is successful
docker push lishisen/tomcat:1.0

You can see that the submission is also conducted according to the image level.

9.2 publishing to alicloud

  1. Log in to alicloud
  2. Container mirroring service found

  1. Create a namespace

    This namespace can be a big project

  1. Create mirror warehouse


  • There are detailed operation steps in the operation guide

Follow the operating instructions

  1. Sign in
docker login --username=[aliyun account name] [registry address]

  1. Generate a version number for the image
docker tag [ImageId] [registry address]/[namespace]/[repository]:[image version]

  1. push upload
docker push [registry address]/[namespace]/[repository]:[image version]

10. Docker network

10.1 Docker0

Question: how does docker handle container network access?

There is a tomcat container and a mysql container. How do the two containers communicate with each other and which network is used for communication?

Check the ip address inside a container and confirm it can be pinged

  1. Run a container
docker run -d -P --name tomcat01 tomcat
  1. View the internal ip address of the container

    Run the ip addr command inside the container via docker exec

docker exec -it tomcat01 ip addr
# Note: if this command is unavailable in the downloaded image, the image simply does not include it. You can rebuild an image yourself using the Dockerfile above that installs net-tools

After startup we can see the container's internal network address. When the container starts it gets an interface such as eth0@if33, with an ip address assigned by docker.

We can Ping this ip address by using the ping command


Every time we start a docker container, docker will assign an ip address to the docker container; On the host machine, as long as we install docker, there will be a network card of docker0, which uses the bridging mode and Veth pair technology.

After the above container is started, use ip addr on the host to view the information about the ip of the container just started.

Every time we start a container, there will be one more network card. The network cards brought by the started container appear one-to-one, which is the Veth pair Technology: a pair of virtual device interfaces, which appear in pairs and are connected to each other.

Because of this feature, Veth pair acts as a bridge to connect all kinds of virtual network devices and realize the interconnection between host and container.

In addition, we have opened two containers tomcat01 and tomcat02. The connectivity between the two containers can also be tested through the ping command, so the containers can also communicate with each other.

The two containers do not communicate directly. When Docker is installed there is a docker0 network card, which acts like a router. When tomcat01 and tomcat02 establish a connection, each is attached to docker0 by a Veth pair, and docker0 forwards the traffic between them.

tomcat01 and tomcat02 share the same router docker0.

When all containers do not specify a network, they are routed by docker0. Docker will assign a default available ip to our containers
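The "next available address" behavior can be illustrated with Python's ipaddress module. This is only an illustration of the idea, not Docker's actual allocator; the 172.17.0.0/16 subnet is docker0's conventional default:

```python
import ipaddress

def allocate_ips(subnet, count):
    """Hand out the first free host addresses of a bridge subnet."""
    net = ipaddress.ip_network(subnet)
    hosts = net.hosts()
    gateway = next(hosts)  # the first host address is the bridge/gateway itself
    return str(gateway), [str(next(hosts)) for _ in range(count)]

gateway, containers = allocate_ips("172.17.0.0/16", 3)
# docker0 conventionally sits at 172.17.0.1; containers take the addresses after it
assert gateway == "172.17.0.1"
assert containers == ["172.17.0.2", "172.17.0.3", "172.17.0.4"]
```

Each newly started container simply receives the next unused address in the bridge's subnet, which is also why addresses change when containers are recreated.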

Docker uses a Linux bridge; on the host, that bridge is docker0

All network interfaces in Docker are virtual, because the forwarding efficiency of virtual interfaces is high.

When a container is deleted, its corresponding veth pair on the bridge disappears as well

10.2 link

Scenario: a microservice connects to a database with url = ip:. Every time MySQL is started with Docker it may be assigned a new ip; if the ip changes, the address configured in the project becomes invalid. We want to be able to change the database's ip without restarting the project, that is, reach the container by name. Docker's --link option solves this.

  1. Start two Tomcat: tomcat01 and tomcat02

View the ip addresses of the two tomcat network cards respectively

ping each other's ip addresses can be communicated

However, pinging by container name fails

  1. Connect using -- link

    Then start a tomcat03 container and connect with -- link

docker run -d -p 8083:8080 --name tomcat03 --link tomcat01 tomcat

Test whether it can ping through the container name

After tomcat03 starts, it can ping tomcat01 by container name

**Note:** the reverse fails: tomcat01 cannot ping tomcat03 by name. The forward link works, but the reverse may not

  1. View network card details
docker network ls

docker inspect [network id]	# Use the id shown by the previous command

You can see that an ip address is assigned to each started container

  1. Different reasons for reverse ping

tomcat03 has tomcat01's address configured locally

You can see by looking at the hosts file of the tomcat03 container

docker exec -it tomcat03 vim /etc/hosts
# You can see the mapping relationship between ip address and container name in the hosts of tomcat03

On the contrary, the mapping relationship cannot be seen in the hosts file of tomcat01, so tomcat01 cannot ping tomcat03 through the container name

The --link option actually just writes an ip-to-container-name mapping into the container's hosts file. This method is no longer recommended.
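The one-way nature of --link follows directly from this hosts-file mechanism; it can be mimicked with a plain dictionary (the ip address below is hypothetical):

```python
def add_link(hosts: dict, alias: str, ip: str) -> None:
    # --link writes "<ip> <alias>" into the *calling* container's hosts
    # file only; the linked container's hosts file is untouched (one-way).
    hosts[alias] = ip

tomcat03_hosts = {}
tomcat01_hosts = {}
add_link(tomcat03_hosts, "tomcat01", "172.17.0.2")  # hypothetical ip

assert tomcat03_hosts["tomcat01"] == "172.17.0.2"  # tomcat03 -> tomcat01 resolves
assert "tomcat03" not in tomcat01_hosts            # reverse lookup fails
```

Since the mapping is a static entry, it also goes stale if the linked container is recreated with a new ip, which is another reason custom networks are preferred.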

docker0 problem: connection access by container name is not supported

10.3 custom network

View all docker networks through the command docker network ls

Network mode:

  • Bridge: bridge mode (default)
  • none: do not configure the network
  • Host: host mode, sharing the network with the server
  • Container: network connectivity in the container (with great limitations)

Building custom networks

# The command to start directly has a -- net bridge by default, which is docker0
docker run -d -P --name tomcat01 [--net bridge] tomcat

# docker0 features: by default, the domain name cannot be accessed. Interconnection can be realized by using -- link
  1. Customize a network
# --driver bridge 	 Network mode, bridge, the default is bridge
# --subnet 	 Subnet; a /16 can hold roughly 255 * 255 addresses
# --gateway 		 gateway
# mynet 		 Custom network name

docker network create --driver bridge --subnet [subnet] --gateway [gateway] mynet

Your own network will be created
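The "255 * 255" in the comment is roughly the capacity of a /16 subnet; Python's ipaddress module confirms the exact count (the 192.168.0.0/16 value here is a hypothetical choice for mynet):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/16")  # hypothetical subnet for mynet
# A /16 holds 2^16 = 65536 addresses; excluding the network and
# broadcast addresses leaves 65534 usable host addresses.
assert net.num_addresses == 65536
assert sum(1 for _ in net.hosts()) == 65534
```

So a single custom bridge network of this size can comfortably address tens of thousands of containers.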

  1. Start the tomcat container through our own defined network
docker run -d -p 8081:8080 --name tomcat01 --net mynet tomcat
docker run -d -p 8082:8080 --name tomcat02 --net mynet tomcat

## Use docker inspect mynet to view details

  1. Test whether it can ping

    After testing, you can ping through ip address and container name

    Now, without using --link, the containers can still ping each other by name

In a custom network, docker maintains the name-to-ip mapping for us. Using custom networks is recommended


redis: different clusters use different networks to ensure that the cluster is safe and healthy

mysql: different clusters use different networks to ensure that the cluster is safe and healthy

10.4 network connectivity

A user-defined network was created above; its network segment is different from the one docker0 comes with

The two network segments cannot be connected

There are two tomcat containers on docker0 and two tomcat containers on mynet. The containers of these two network segments cannot be ping ed. Therefore, the containers on docker0 need to be connected to mynet to realize the interconnection of containers of two different network segments.


tomcat01 container connects to mynet network

docker network connect mynet tomcat01
docker inspect mynet	# View details
# After connecting, tomcat01 appears directly under mynet in the inspect output
# In effect: one container, two ip addresses

docker exec -it tomcat01 ping tomcat-net-01

If you want to operate others across the network, you need to use docker network connect to connect

11. Deploy Redis cluster

  1. Create a redis network card
docker network create redis --subnet [subnet]

  1. Create 6 redis configurations through scripts
for port in $(seq 1 6); do
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >> /mydata/redis/node-${port}/conf/redis.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

  1. Start 6 redis containers
for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} -v /mydata/redis/node-${port}/data:/data -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip [ip prefix]${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done

  1. Create cluster
docker exec -it redis-1 /bin/sh		# Enter the redis-1 node

# Execute the command to create a cluster
redis-cli --cluster create [six node ip:port entries] --cluster-replicas 1

  1. Test whether the cluster is set up successfully

set a b stores a value in the cluster and shows which master node handled it. Then stop that container with docker stop redis-3 and query the value again: it is still served, because node 14 was the replica of node 13 and took over when its master went down

docker completes setting up the redis cluster.
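Behind --cluster-replicas 1, the three masters split Redis Cluster's 16384 hash slots between them. The slot arithmetic can be sketched as follows (an illustration of the even split, not the exact ranges redis-cli prints):

```python
def slot_ranges(masters: int, total_slots: int = 16384):
    """Split the slot space into contiguous ranges, one per master."""
    base, extra = divmod(total_slots, masters)
    ranges, start = [], 0
    for i in range(masters):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Three masters each own roughly a third of the 16384 slots
assert slot_ranges(3) == [(0, 5461), (5462, 10922), (10923, 16383)]
```

Every key hashes to one of these slots, which determines the master (and therefore the replica) responsible for it.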

12. Spring Boot microservice packaging Docker image

  1. Create a Spring Boot project

    The Controller requests to return Hello, Docker

  2. Packaged application

    Package the project package into a jar package and upload the jar package to the Linux virtual machine

  1. Write Dockerfile
FROM java:8

COPY *.jar /app.jar

CMD ["--server.port=8080"]


ENTRYPOINT ["java","-jar","/app.jar"]
  1. Build mirror
docker build -t demo .

  1. Publish run

13. Docker Compose

13.1 introduction

Official website:

In the above operations, if we want to build and use an image, we need to do the following: Dockerfile – build – run, manual operation, and a single container.

If there are hundreds of services in a large microservice project and there are dependencies between these services, the implementation of the above operation will be very cumbersome. Docker Compose is a container for easy management. It can define and run multiple containers.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.

Compose works in all environments: production, staging, development, testing, as well as CI workflows. You can learn more about each case in Common Use Cases.

Using Compose is basically a three-step process:

  1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
    • Create Dockerfile to ensure that the project can run anywhere
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
    • Define the service through the docker-compose.yml configuration file
    • Services: container, application (web, mysql, redis...)
  3. Run docker compose up and the Docker compose command starts and runs your entire app. You can alternatively run docker-compose up using the docker-compose binary.
    • Start project

Role: batch container orchestration

Compose is an official open source project of Docker, which needs to be installed before use.

Dockerfile allows programs to run anywhere and simplifies deployment for operation and maintenance. Suppose a web service requires multiple containers: redis, mysql, nginx... Building them one by one would be very troublesome.

You can write a Compose to package these services in batches

version: "3.9"  # optional since v1.27.0
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
# The containers arranged in this file can be started with one click through docker compose up
# All these services run as a complete project (a set of associated containers)

13.2 installing Compose

Official website address:

  1. download
# The download address of the official website is very slow
sudo curl -L "$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Domestic image
curl -L`uname -s`-`uname -m`  > /usr/local/bin/docker-compose

  1. Authorize files
sudo chmod +x /usr/local/bin/docker-compose
  1. Confirm successful installation
docker-compose version

13.3 use

Official website:

Official website case: python application, counter, counting with redis

  1. Create an app

    That is, create a project directory

mkdir composetest
cd composetest
  1. Create the application file app.py
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
  • Create a required dependency package requirements.txt
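The retry loop in get_hit_count can be exercised without a running Redis by stubbing the client. FlakyCache below is a hypothetical stand-in for redis.Redis that fails twice before succeeding:

```python
import time

class FlakyCache:
    """Hypothetical stand-in for redis.Redis: fails twice, then counts like INCR."""
    def __init__(self):
        self.failures = 2
        self.hits = 0

    def incr(self, key):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("redis not ready yet")
        self.hits += 1
        return self.hits

def get_hit_count(cache, retries=5):
    # Same shape as the Compose example's retry loop
    while True:
        try:
            return cache.incr('hits')
        except ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.01)  # back off briefly before retrying

cache = FlakyCache()
assert get_hit_count(cache) == 1  # succeeds after two retried failures
assert get_hit_count(cache) == 2
```

The retry-with-backoff pattern matters here because Compose starts the redis container in parallel with the web container, so the first few connection attempts may legitimately fail.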
  1. Create Dockerfile file
# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]

# The example above is from the official website; if it fails (possibly a version issue), use the following instead
FROM python:3.6-alpine		# Start from the python:3.6 base image
ADD . /code					# Copy the current directory into the image at /code
WORKDIR /code				# Set the working directory to /code
RUN pip install -r requirements.txt		# Install the python dependencies
CMD ["python", "app.py"]	# Set the container's default command to run the app
  1. Create a docker-compose.yml file
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

# The example above is from the official website; if it fails (possibly a version issue), use the following instead
version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"
# The Compose file defines two services: web and redis
# web uses the image built from the Dockerfile in the current directory
# and forwards the container's port 5000 to port 5000 on the host
# The redis service uses the public redis image pulled from the Docker Hub registry

  1. Run test

    Execute the following command in the folder with docker-compose.yml

docker-compose up

Successfully started, including two services, web and redis

technological process:

  1. Create network
  2. Execute Docker-compose.yml
  3. Start the service. After starting the service, you can see that two services have been created

The directory we created above is named composetest, and the two services are created according to the docker-compose.yml configuration file


The names of these started services are automatically generated and are some default rules.

After startup, there are two services. You can use curl access to get the return value

All the dependencies in docker compose have been downloaded for us

docker service ls
# This command is to view the services in the cluster. Because the above example is not a cluster, an error will be reported

Default service name: filename_servicename_num

If there are multiple servers and clusters, one of our services may run on server A or server B

_num is the replica index.

If the redis service needs to run with 4 replicas, there will be 4 numbered containers
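The naming rule can be sketched in a few lines (this follows the classic docker-compose v1 underscore convention; newer Compose releases join the parts with hyphens instead):

```python
def default_container_names(project: str, service: str, replicas: int):
    # Compose v1 names containers <project>_<service>_<index>
    return [f"{project}_{service}_{i}" for i in range(1, replicas + 1)]

assert default_container_names("composetest", "redis", 4) == [
    "composetest_redis_1",
    "composetest_redis_2",
    "composetest_redis_3",
    "composetest_redis_4",
]
```

The project part defaults to the directory name, which is why our containers above are prefixed with composetest.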

  1. Network rules

The contents of the project are all under the same network, so they can be accessed through the domain name.

Command: docker network inspect composetest_default to view the details of network information

  1. Out of Service

Shortcut keys: Ctrl + C

Enter the composetest directory and execute the command: docker-compose down

13.4 Compose write configuration rule yaml

Official website address:

The core of Compose is the docker-compose.yaml configuration file

  • The docker-compose.yaml configuration file has three layers
version: ''   # The version, which corresponds to the docker engine version
services:     # The services
  service1:   # e.g. web
    # configuration for this service
  service2:   # e.g. redis
    # ...
# Other top-level configuration: networks/volumes, global settings
  1. version

    Version information is based on the information given on the official website

  1. services

Note the depends_on configuration option

Services are started in order: if web depends on redis, redis must be started first

13.5 deploy a WordPress blog with Compose

Official website case:

  1. Create project directory
  2. Write the docker-compose.yml configuration file
version: "3.9"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
  wordpress_data: {}
  1. start-up
docker-compose up -d
  1. see

13.6 actual combat – build your own micro service

  1. Write project microservices
  2. Dockerfile build image
FROM java:8

COPY *.jar /app.jar

CMD ["--server.port=8080"]


ENTRYPOINT ["java","-jar","/app.jar"]
  1. docker-compose.yaml orchestration project
version: '3.9'

services:
  lssapp:
    build: .
    image: lssapp
    depends_on:
      - redis
    ports:
      - "8080:8080"
  redis:
    image: "redis:alpine"
  1. Package the project on linux
docker-compose up	# start-up

  1. Start test

If the project needs to be rebuilt, use the command docker-compose up --build

14. Docker Swarm

14.1 environmental construction

You can purchase four servers in alicloud for cluster operation and choose to charge by volume

You can also open four virtual machines locally to simulate cluster operation

Install docker on 4 servers

14.2 introduction

Official website address:

Working mode:

Docker Engine 1.12 introduces swarm mode that enables you to create a cluster of one or more Docker Engines called a swarm. A swarm consists of one or more nodes: physical or virtual machines running Docker Engine 1.12 or later in swarm mode.

There are two types of nodes: managers and workers, i.e. management nodes and worker nodes

All operations are in the management node

14.3 operation

# Configure the cluster and make the docker01 node a master node
# Initialize node
docker swarm init --advertise-addr [this host's ip]

# Two commands can be obtained after initialization completes
# Join a node
docker swarm join
# Get a token and let other nodes join through the token
docker swarm join-token manager
docker swarm join-token worker

# Execute the command on the docker02 node, add the node to docker01, and the former one is a worker
docker swarm join --token SWMTKN-1-10kwelhmau8q2l45uqp8jep709ns2mlro1csndykeeigvkl0vx-873o4olo10103ctckdl2ppadh

# Execute the command on the docker01 master node to see the node information of the cluster
docker node ls

Join docker03 to the cluster as a manager; join docker04 to the cluster as a worker

Cluster setup completed!!!

14.4 Raft protocol

The cluster built above has two manager nodes and two worker nodes. If one node goes down, are the other nodes still usable?

Raft protocol: ensure that most nodes survive before they can be used.

If the docker01 node goes down, other nodes, including another management node, cannot be used

After the docker01 node is started again, we find that the Leader has become the docker03 node

# Use the command to leave a node
docker swarm leave

# Result: Node left the swarm

Add the docker02 node to the management node, stop the management node docker01 again, and other management nodes can still operate. There are two remaining management nodes

If one more management node is stopped, leaving only one, the cluster stops working

Therefore, to keep the cluster available, use at least three manager nodes: the cluster works normally only while more than one manager, i.e. a majority, is alive (most nodes must survive for the cluster to remain usable and highly available)
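The majority rule can be written down directly; this small sketch explains why three managers tolerate one failure while two managers tolerate none:

```python
def quorum(managers: int) -> int:
    # Raft needs a strict majority of manager nodes to be alive
    return managers // 2 + 1

def cluster_usable(managers: int, alive: int) -> bool:
    return alive >= quorum(managers)

assert quorum(3) == 2
assert cluster_usable(3, 2) is True    # one manager down: still a majority
assert cluster_usable(3, 1) is False   # two down: no majority, cluster stalls
assert cluster_usable(2, 1) is False   # two managers give no fault tolerance
```

This is exactly what the experiment above showed: with two managers, stopping either one left the survivor below quorum.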

14.5 Swarm cluster elastic creation service

Elasticity, expansion and contraction, cluster

A project previously started with docker compose up is also stand-alone.

Everything under the cluster goes into swarm, and everything becomes a docker service.

Containers become services

Suppose redis needs to start three containers. For a single entity, it starts three containers.

In the cluster state, to ensure high availability, a web application connects to redis, with three redis replicas distributed across different machines. They cannot be reached by ip address, because container ips change; instead they are accessed through the service name. The cluster hides the differences between the underlying nodes and is addressed only by service name. A redis service may run several replicas, so if one dies, the others still serve.


# Start a nginx service
docker service create  -p 8888:80 --name my-nginx nginx

# docker run: starts a single container, no scaling
# docker service: starts a service, supports scaling up and down

View the service, start only one container and only one copy

In this case, if the traffic is very large, one service may not be able to carry it, and several more services need to be opened, so dynamic capacity expansion should be achieved

Dynamic expansion and contraction capacity

# Scale the my-nginx service to 3 replicas (up or down)
docker service update --replicas 3 my-nginx
# This command performs the same scaling
docker service scale my-nginx=10

The started multiple replicas are allocated to each host in the cluster, and these services can be accessed by any node in the cluster. The service can have multiple replicas, dynamically expand and shrink capacity, and achieve high availability.
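The way replicas end up spread over the hosts can be approximated with a round-robin placement; this is a simplification of Swarm's default "spread" strategy, not its actual scheduler:

```python
def spread_replicas(replicas: int, nodes: list):
    """Place replicas across nodes round-robin, balancing the counts."""
    placement = {n: 0 for n in nodes}
    for i in range(replicas):
        placement[nodes[i % len(nodes)]] += 1
    return placement

nodes = ["docker01", "docker02", "docker03", "docker04"]
# Scaling my-nginx to 10 replicas spreads tasks roughly evenly
assert spread_replicas(10, nodes) == {
    "docker01": 3, "docker02": 3, "docker03": 2, "docker04": 2,
}
```

Because the routing mesh publishes the service's port on every node, any node can answer requests regardless of where the replicas actually landed.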

14.6 concept summary

Swarm: cluster management and arrangement. docker can initialize a swarm cluster and other nodes can join. There are two roles: management and work.

Node: a docker node. Multiple nodes form a network cluster. There are two roles: management and work

Service: it is a task that can be run in the management node or work node. It is the core and can be accessed by users. Startup mode: docker service

Task: the command running inside a container; the concrete unit of work. Containers are created level by level (service -> task -> container)

Command --> Manager (API) --> Scheduler --> Worker node (creates and maintains the Task's container)

Service replica and global service

All nodes are allocated, and both management nodes and work nodes can run projects

Projects can be divided into global run and run only on replicas

You can set projects to run only on replicas and globally

14.6.1 Docker Stack
# Docker Compose: single-machine deployment of a project
docker-compose -f wordpress.yml up -d
# Docker Stack: cluster deployment of a project
docker stack deploy -c wordpress.yml [stack name]

# docker-compose.yml using stack features
version: '3.4'
services:
  mongo:
    image: mongo
    restart: always
    networks:
      - mongo_network
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 2
  mongo-express:
    image: mongo-express
    restart: always
    networks:
      - mongo_network
    ports:
      - target: 8081
        published: 80
        protocol: tcp
        mode: ingress
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
networks:
  mongo_network:
    external: true
14.6.2 Docker Secret

Configure security, configure password, certificate, etc

14.6.3 Docker Config

Unified configuration of containers

Learning reference Videos:


Tags: Java Linux Docker

Posted on Sat, 30 Oct 2021 19:45:07 -0400 by Domhnall