Container development tool -- An introduction to Docker

Docker is a Linux-based deployment tool. If you are not familiar with Linux commands, you can get up to speed with this article: Linux quick start

1, Docker introduction

1.1 background

In actual business development we encounter multiple environments: development, test and production (i.e. deployed online). If development uses JDK 8 while the other environments run JDK 11, deployment becomes very troublesome. We therefore want to package the development environment itself and hand it to test, and then to production; this is how container development was born.

1.2 concept

  • Docker is an open source application container engine
  • Docker allows developers to package their applications and dependency packages into a lightweight and portable container, and then publish them to any popular Linux machine.
  • Containers are fully isolated from one another via a sandbox mechanism

1.3 installation

# 1. Update yum packages to the latest version
yum update
# 2. Install required packages: yum-utils provides the yum-config-manager utility; the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2
# 3. Set the yum source
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# 4. Install docker (press y when prompted)
yum install -y docker-ce
# 5. Check the docker version to verify the installation succeeded
docker -v

After installation, set docker to start automatically on boot; otherwise it must be started manually, and the commands below will not work.

  • Image accelerator
  • Log in to the Aliyun image service console and select "Image Accelerator" in the left menu to see your personal accelerator address
  • Add the following at the end of the /etc/docker/daemon.json file (create it if it does not exist):
{
"registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
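To make the accelerator take effect, docker has to be restarted after daemon.json is edited. A minimal sketch of the whole step, written to a temporary path here so it can run anywhere, with the accelerator ID left as a placeholder:

```shell
# Write the accelerator config; <your-id> is a placeholder for your Aliyun accelerator ID.
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
# On a real host the file lives at /etc/docker/daemon.json, followed by:
#   systemctl daemon-reload && systemctl restart docker
cat /tmp/docker-demo/daemon.json
```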

1.4 architecture

A brief introduction to the main concepts in Docker's architecture:

  • Client: the docker command-line interface, through which users issue commands
  • Host: the machine (here, a virtual machine) running the Docker daemon
  • Image: a read-only file system, much like the ISO used when installing a virtual machine; containers are created from it
  • Container: a container can only be created from an image. Think of an image as a class and a container as an object (instance) of that class.
  • Repository: similar to github, a place for storing and sharing images

2, docker command

2.1 process related commands (understand)

# Start docker service
systemctl start docker

# Stop docker service
systemctl stop docker

# Restart docker service
systemctl restart docker

# View docker service status
systemctl status docker

# Set the docker service to start automatically on boot
systemctl enable docker

2.2 image related commands (important)

Available image versions can be looked up on the official website (Docker Hub).

# List local images
docker images

# Query a specific local image
docker images <image name>
# Pull an image
	# Option 1: no tag after the name pulls the latest version
	docker pull <image name>
	# Option 2: a tag after the name pulls that specific version
	docker pull redis:5.0
	
# Delete images
docker rmi <image id> # Delete the specified image
docker rmi `docker images -q` # Delete all images; ` is the backtick, the key above Tab
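The backticks here are shell command substitution: the output of the inner command becomes the arguments of the outer one (the modern spelling is `$(...)`). A docker-free sketch of the mechanism, with echo standing in for the docker commands:

```shell
# Stand-in for: docker rmi `docker images -q`
ids=$(printf '%s ' img1 img2 img3)   # stands in for: docker images -q
echo rmi $ids                        # stands in for: docker rmi <id> <id> <id>
# → rmi img1 img2 img3
```

Left unquoted, $ids undergoes word splitting, so each id becomes a separate argument — exactly how the output of `docker images -q` feeds every image id to docker rmi.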

2.3 container related commands (important)

# View container
docker ps # View running containers
docker ps -a # View all containers

# Create and start the container
docker run <parameters>

Parameter Description:

-i: keep the container running
-t: allocate a pseudo-terminal (the container can be operated from the command line after creation); usually combined with -i as -it, an interactive container
-d: run the container in the background (you do not enter it after creation; enter it with docker exec); usually combined with -i as -id, a daemon container
--name: assign a name to the container

docker run -it --name=c1 centos:7 /bin/bash # Create an interactive container
docker run -id --name=c2 centos:7 # Create a daemon container

An interactive container stops automatically when you exit it; a daemon container keeps running in the background.

# Enter a container
docker exec -it <container name> /bin/bash

# Stop a container
docker stop <container name>

# Start a container
docker start <container name>

# Delete a container (it must be stopped first)
docker rm <container name or id>

# View container information
docker inspect <container name>

3, Data volume

3.1 concept and function of data volume

Let's learn about data volumes with the following questions:

  1. Does the data still exist after the container is deleted?
  2. Can external machines access containers directly?
  3. How do containers exchange data?

3.1.1 data volume concept

  • A data volume is a directory or file on the host (i.e. the virtual machine)
  • When the container directory and the data volume directory are bound, the modifications of both sides will be synchronized
  • A data volume can be mounted by multiple containers at the same time
  • A container can mount multiple data volumes

3.1.2 function of data volume

  • Container data persistence (somewhat similar to redis)
  • Indirect communication between external machine and container
  • Data exchange between containers

3.2 configuring data volumes

When creating a container, use the -v parameter to set a data volume

  • Note
  1. The container directory must be an absolute path
  2. If a directory does not exist, it is created automatically
# Mount the host path /root/data into the container c1 (centos:7) at /root/data_container
# The two paths are separated by a colon: host path first, then container path
# The host path may be shortened to ~/data, but the container path must be absolute, e.g. /root/data_container
docker run -it --name=c1 -v /root/data:/root/data_container centos:7 /bin/bash

Verify that the data is persistent

  1. Write a file into the container's /root/data_container
  2. After deleting the container, check whether the file still exists
  3. Create the container again to see if the file exists in the container
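The synchronization described in 3.1.1 can be pictured without docker: a bind mount makes a host path and a container path refer to the same underlying data, much as a hard link makes two paths refer to one file. A rough, docker-free analogy:

```shell
# Analogy only: two directory entries pointing at the same file contents,
# so a write through either path is visible through both.
mkdir -p /tmp/host_side /tmp/container_side
echo "written by host" > /tmp/host_side/data.txt
ln -f /tmp/host_side/data.txt /tmp/container_side/data.txt  # hard link: same file
echo "written by container" >> /tmp/container_side/data.txt
cat /tmp/host_side/data.txt  # shows both lines
```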
# A container can mount multiple data volumes, i.e. several path pairs
# \ continues the command on the next line; it must be preceded by a space
docker run -it --name=c2 \
-v ~/data2:/root/data2 \
-v ~/data3:/root/data3 \
centos:7 /bin/bash

# Two containers mount a data volume
docker run -it --name=c3 -v /root/data:/root/data_container centos:7 /bin/bash
docker run -it --name=c4 -v /root/data:/root/data_container centos:7 /bin/bash

3.3 data volume container

Concept: a container dedicated to hosting data volumes
Function: data exchange among multiple containers

  1. Create and start the c3 data volume container
# No host directory is specified; one is generated by default
# View the generated host directory with docker inspect c3
docker run -it --name=c3 -v /volume centos:7 /bin/bash
  2. Create and start the c1 and c2 containers, setting their data volumes with the --volumes-from parameter
docker run -it --name=c1 --volumes-from c3 centos:7 /bin/bash
docker run -it --name=c2 --volumes-from c3 centos:7 /bin/bash

c1 and c2 are created from the c3 data volume container; even if c3 is stopped, data exchange between c1 and c2 is unaffected.

4, docker application deployment

4.1 mysql deployment

  • The external machine cannot directly access the container, but can access the host machine
  • When the network service in the container needs to be accessed by an external machine, the port providing the service in the container can be mapped to the port of the host. This operation is called port mapping.


The steps are as follows:

# 1. Search for images
docker search mysql

# 2. Pull the image
docker pull mysql:5.6

# 3. Create a container and set port mapping and directory mapping
 # Create a mysql directory in the host's /root directory
 mkdir ~/mysql
 cd ~/mysql

 docker run -id \
 -p 3307:3306 \
 --name=c_mysql \
 -v $PWD/conf:/etc/mysql/conf.d \
 -v $PWD/logs:/logs \
 -v $PWD/data:/var/lib/mysql \
 -e MYSQL_ROOT_PASSWORD=123456 \
 mysql:5.6 

Parameter Description:

  • -p 3307:3306: map port 3306 in the container to port 3307 on the host.
  • -v $PWD/conf:/etc/mysql/conf.d: mount conf under the host's current directory to /etc/mysql/conf.d in the container
  • -e MYSQL_ROOT_PASSWORD=123456: initialize the root user's password.
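The $PWD in the -v options is ordinary shell expansion, not docker syntax: the shell substitutes the current working directory before docker ever sees the argument. A quick check, using a neutral directory:

```shell
# $PWD expands to the current directory, so "-v $PWD/conf:..." mounts ~/mysql/conf
# when the command is run from ~/mysql.
cd /tmp
echo "-v $PWD/conf:/etc/mysql/conf.d"
# → -v /tmp/conf:/etc/mysql/conf.d
```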
# 4. Enter the container and operate mysql
docker exec -it c_mysql /bin/bash
mysql -uroot -p123456
show databases;
create database db1;

# 5. Test access using navicat

4.2 tomcat deployment

# 1. Search for images
docker search tomcat

# 2. Pull the image
docker pull tomcat

# 3. Create a container and set port mapping and directory mapping
 # Create a tomcat directory under the /root directory to store tomcat data
 mkdir ~/tomcat
 cd ~/tomcat
 docker run -id --name=c_tomcat \
 -p 8080:8080 \
 -v $PWD:/usr/local/tomcat/webapps \
 tomcat

# 4. Write an html file in the current directory and test access from a browser

4.3 Nginx deployment

# 1. Search for images
docker search nginx

# 2. Pull the image
docker pull nginx

# 3. Create a container and set port mapping and directory mapping
 # Create an nginx directory under the /root directory to store nginx data
 mkdir ~/nginx
 cd ~/nginx
 mkdir conf
 cd conf
 # Create the nginx.conf file under ~/nginx/conf/ and paste in the contents below
 vim nginx.conf

The file contents are as follows:

user nginx;
worker_processes 1;

error_log   /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
worker_connections 1024;
}

http {
	 include       /etc/nginx/mime.types;
	 default_type  application/octet-stream;
	 log_format  main 		'$remote_addr - $remote_user [$time_local] "$request" '
							'$status $body_bytes_sent "$http_referer" '
							'"$http_user_agent" "$http_x_forwarded_for"';
	access_log   /var/log/nginx/access.log   main;
	
	sendfile  on;
	#tcp_nopush on;
	keepalive_timeout  65;
	
	#gzip on;
	include /etc/nginx/conf.d/*.conf;
}

After editing the file, continue to create the container

docker run -id --name=c_nginx \
-p 80:80 \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
-v $PWD/logs:/var/log/nginx \
-v $PWD/html:/usr/share/nginx/html \
nginx

# 4. Access nginx from an external machine

4.4 Redis deployment

# 1. Search for images
docker search redis

# 2. Pull the image
docker pull redis:5.0

# 3. Create container and set port mapping
docker run -id --name=c_redis -p 6379:6379 redis:5.0

# 4. Connect from an external machine using a redis client (here, redis-cli.exe on Windows)
./redis-cli.exe -h 192.168.220.12 -p 6379
keys *
set name lxs
get name

5, Dockerfile

5.1 docker image principle

Think about the following questions:

  1. A CentOS iso image is several GB; why does the centos image in docker only need about 200 MB?
  2. The tomcat installation package is only about 70 MB; why is its docker image over 600 MB?

First, a brief introduction to the Linux file system

  • Linux file system consists of bootfs and rootfs
  1. bootfs: contains bootloader and kernel
  2. rootfs: root file system, which contains standard directories and files such as / dev, / proc, / bin, / etc in a typical Linux system
  3. bootfs is basically the same across linux distributions, but rootfs differs (e.g. ubuntu, centos)

docker image principle

  • Docker images are stacked layers of special file systems
  • At the bottom is bootfs, which reuses the host's bootfs
  • The second layer is the root file system rootfs, called the base image
  • Further image layers can then be stacked on top
  • Union file system technology merges the different layers into a single file system, presenting a unified view that hides the existence of the layers; from the user's point of view there is only one file system
  • One image can sit on top of another; the image below is called the parent image, and the bottom-most one the base image
  • When a container is started from an image, Docker loads a read-write file system on top as the container

Answers to questions:

  • Q1: What is the essence of a docker image?
    A1: A layered file system
  • Q2: A CentOS iso image is several GB; why is the centos image in docker only about 200 MB?
    A2: The CentOS iso image contains both bootfs and rootfs, while docker's centos image reuses the host's bootfs and contains only rootfs plus the layers above it
  • Q3: The tomcat installation package is only about 70 MB; why is its docker image over 600 MB?
    A3: Because docker images are layered, tomcat itself is only about 70 MB but depends on its parent and base images; counting all layers, the published tomcat image exceeds 600 MB

5.2 image production

5.2.1 container to image (understand)

The contents of a data volume cannot be packaged into an image

# Basic instructions
docker commit <container id> <image name>:<version>  # container to image
docker save -o <archive file> <image name>:<version>  # save image to an archive
docker load -i <archive file>  # load image from an archive
# Specific operation
# Create tomcat container
docker run -id --name=c_tomcat \
-p 8080:8080 \
-v $PWD:/usr/local/tomcat/webapps \
tomcat

# Enter tomcat container
docker exec -it c_tomcat /bin/bash

#Create a.txt b.txt
cd ~
touch a.txt b.txt

#Container to mirror
docker commit 28b8d4dc9744 lxs_tomcat:1.0  # The name and version can be customized

#Compressed image
docker save -o lxs_tomcat.tar lxs_tomcat:1.0  # The image name can be customized

#Delete the original image
docker rmi lxs_tomcat:1.0

#Load image from compressed file
docker load -i lxs_tomcat.tar

#Generating container
docker run -it --name=new_tomcat lxs_tomcat:1.0 /bin/bash

#Enter the new container and view its contents
docker exec -it new_tomcat /bin/bash

#a.txt and b.txt exist, but the data-volume content under webapps does not (volume contents are not packaged into the image)

5.2.2 dockerfile (important)

  1. Concept
  • A Dockerfile is a text file containing a series of instructions
  • Each instruction builds one layer on top of the base image; together they build a new image
  • For developers: it provides a completely consistent development environment for the whole team
  • For testers: they can directly use the image built during development, or build a new image from the Dockerfile
  • For operations staff: applications can be migrated seamlessly during deployment
  2. Publish a Spring Boot project
  • Package the Spring Boot project into a jar in advance
  • On the host, create the directory /root/app and enter it
  • Copy in the springboot.jar package (if it has another name, rename it with the mv command)
  • Create the Dockerfile and name it springboot_dockerfile:
FROM java:8
MAINTAINER lxs <lxs@163.com>
ADD springboot.jar app.jar
CMD ["java","-jar","app.jar"]

Parameter description
FROM: specifies the parent image
MAINTAINER: author information (optional)
ADD: adds a file (here, the host's jar file is copied into the image as app.jar)
CMD: the container startup command (runs the jar file)

Create the image from the Dockerfile:

docker build -f ./springboot_dockerfile -t app .

Parameter description
-f: path of the Dockerfile to build from
-t: name (and optional tag) for the image being created
app: the image name (customizable)
.: the build context, i.e. the directory whose contents are sent to the docker daemon; ADD paths are resolved relative to it

# Create container
docker run -id -p 9000:8080 app

# From an external machine, access port 9000 on the host at the application's path
# The path can be found in the container's logs
docker logs <container id>
  3. Summary

With a Dockerfile, the project's jar, JDK version and other environment details can be packaged together and handed to testing and operations staff.

6, Service orchestration (difficult)

6.1 concept

The application system of a microservice architecture generally contains several microservices, and each microservice is usually deployed with multiple instances. If every microservice had to be started and stopped manually, the maintenance workload would be enormous. Consider the daily work involved:

  • Building images from Dockerfiles or pulling them from dockerhub
  • Creating multiple containers
  • Managing these containers (start, stop, delete)

The work above can be greatly simplified by service orchestration.

6.2 Docker Compose

Docker Compose is a tool for orchestrating multi-container distributed deployments. It provides commands to centrally manage the complete life cycle of containerized applications, including building, starting and stopping services. Usage steps:

  1. Define each service's runtime environment image with a Dockerfile
  2. Define the services that make up the application in docker-compose.yml
  3. Run docker-compose up to start the application

6.2.1 installing docker compose

# Compose supports Linux, macOS and Windows. Docker must be installed before Compose.
# Here we install the precompiled binary package on a Linux system.
curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Make the file executable
chmod +x /usr/local/bin/docker-compose
# Check the version information
docker-compose --version
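The backticked uname calls in the download URL pick the binary matching the current machine: uname -s prints the kernel name and uname -m the architecture. A sketch of how the URL is assembled:

```shell
# Build the same URL with command substitution; on an x86_64 Linux box this
# selects the docker-compose-Linux-x86_64 binary.
url="https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)"
echo "$url"
```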

6.2.2 uninstall docker compose

# For binary package installation, delete the binary file
rm /usr/local/bin/docker-compose

6.3 orchestrating nginx + springboot

  1. Create the docker-compose directory
mkdir ~/docker-compose
cd ~/docker-compose
  2. Write the docker-compose.yml file
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    links:
      - app
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
  app:
    image: app
    expose:
      - "8080"
  3. Create the ./nginx/conf.d directory
mkdir -p ./nginx/conf.d
  4. Write the app.conf file in the ./nginx/conf.d directory
server {
	listen 80;
	access_log off;

	location / {
		proxy_pass http://app:8080/hello;
	}
}
  5. In the ~/docker-compose directory, start the containers with docker-compose
docker-compose up -d # -d starts the containers in detached (daemon) mode
  6. Test access
http://<host ip>/hello

7, docker private registry (understand)

7.1 background

As on github, repositories can be public or private. When we do not want our images on the public network, we can use a private registry, though this is relatively rare.

7.2 building a private registry

# 1. Pull the private registry image
docker pull registry
# 2. Start the private registry container
docker run -id --name=registry -p 5000:5000 registry
# 3. Open a browser at http://<registry server ip>:5000/v2/_catalog; seeing {"repositories": []} means the private registry was built successfully
# 4. Modify daemon.json
vim /etc/docker/daemon.json
# Add the key below to the file, then save and exit. This makes docker trust the private registry address; replace the ip with your own registry server's ip
{"insecure-registries":["<registry server ip>:5000"]}
# 5. Restart the docker service
systemctl restart docker
docker start registry
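Note that daemon.json must stay a single valid JSON object, so if the image accelerator from section 1.3 is already configured, insecure-registries is added alongside it rather than replacing it. A sketch, written to a temporary path so it can run anywhere; the accelerator ID is a placeholder and the registry IP is the one used in this section:

```shell
# Both keys live in one JSON object; on a real host this is /etc/docker/daemon.json.
cat > /tmp/daemon-demo.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.220.12:5000"]
}
EOF
cat /tmp/daemon-demo.json
```

After editing the real /etc/docker/daemon.json, restart the docker service as shown above for the setting to take effect.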

7.3 pushing an image to the private registry

# 1. Tag the image for the private registry
docker tag centos:7 192.168.220.12:5000/centos:7
# 2. Push the tagged image
docker push 192.168.220.12:5000/centos:7

7.4 pulling images from the private registry

#Pull the image
docker pull 192.168.220.12:5000/centos:7

Tags: Linux Docker Container

Posted on Wed, 20 Oct 2021 14:30:24 -0400 by brbsta