Introduction to and use of Docker containers

Why does Docker exist?

One product, two sets of environments: development and production! The application environment and the application configuration must match.

Traditionally: developers build the jar, and operations handles deployment!

Now: development, packaging, deployment, and launch are completed as a single process!

java – apk – release (app store) – a user installs the apk – usable right after installation!

java – jar (with its environment) – package the project together with the environment (an image) – (Docker repository: the store) – download the published image – just run it directly!

Docker solution:

Docker can make full use of the server through the isolation mechanism!

Isolation mechanism: multiple applications are isolated from each other, unrelated and will not affect each other.

Comparison between VM (virtual machine) and Docker:

VM: runs a full Linux CentOS native image (an entire virtual computer!) for isolation. You need to start multiple virtual machines, each several GB!

Docker: isolation via containers. The image (core environment of a few MB, plus jdk + mysql) is very compact. Just run the image! Small! Sized in MB or even KB.

Docker introduction:

Docker is developed in the Go language! It is an open-source project!

Official website:

Documentation address: the Docker documentation is extremely detailed

Repository address: Docker Hub (similar to GitHub)

What can Docker do?

Disadvantages of virtual machine technology:

  1. It takes up a lot of resources
  2. Redundant steps
  3. Slow start

Containerization Technology:

Containerization technology does not simulate a complete operating system.

Compare Docker and virtual machine technologies:

1. A traditional virtual machine virtualizes a set of hardware, runs a complete operating system on it, and then installs and runs software on that system
2. Applications in a container run directly on the host's kernel. The container has no kernel of its own and no virtualized hardware, so it is more lightweight
3. Containers are isolated from each other. Each container has its own file system, and they do not affect each other

DevOps (development + operations):

Faster delivery and deployment of applications

Traditional: a pile of help documents and installers

Docker: package an image, publish it, test it; one-click deployment

More convenient upgrade and capacity expansion

After using Docker, deploying applications is like building blocks!

Simpler system operation and maintenance

After containerization, the development and test environments are highly consistent.

More efficient utilization of computing resources

Docker is kernel level virtualization, which can run many container instances on a physical machine! Server performance can be squeezed to the extreme!

Docker installation:

Basic composition of Docker:

**Image:** a Docker image is like a template from which container services can be created. For example: tomcat image → run → tomcat01 container (provides the service). Multiple containers can be created from one image (the final service or project runs inside a container).

**Container:** with container technology, Docker runs one application or a group of applications independently; containers are created from images.

Start, stop, delete: the basic commands!

For now, a container can be understood as a simple Linux system.

**Repository:** a repository is where images are stored! Repositories are divided into public and private repositories!

Docker Hub (foreign, the default), Alibaba Cloud, and others all provide container registries (configure an image accelerator to speed up downloads!)
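Tying the three concepts together, here is a hedged sketch of the basic lifecycle (the image and container names are illustrative; the commands need a Docker host, so they are shown commented in the same style as the rest of these notes):

```shell
# docker pull tomcat                          # repository -> local: download an image
# docker run -d --name tomcat01 tomcat        # image -> container: create and start
# docker ps                                   # the running container shows up here
# docker stop tomcat01 && docker rm tomcat01  # stop, then delete the container
# docker rmi tomcat                           # finally remove the image itself
```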

Docker installation:

  1. Uninstall old version:

    yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate
  2. Required packages:

    yum install -y yum-utils
  3. Set up the image repository:

    The default repository is foreign:

    yum-config-manager \
        --add-repo \

    Domestic address:

    yum-config-manager \
        --add-repo \

    # Update the yum package index

    yum makecache fast

  4. Install Docker (ce: Community Edition; ee: Enterprise Edition)

    yum install docker-ce docker-ce-cli containerd.io
  5. Start docker

    # Start docker
    systemctl start docker
    # Enable docker at boot
    sudo systemctl enable docker
    # Stop docker
    sudo systemctl stop docker
  6. Use docker version to check whether the installation is successful

  7. Verify that the Docker engine is installed correctly by running the image (Hello World)

    docker run hello-world
  8. View the downloaded hello-world image

    docker images
  9. To uninstall docker:

    1. Remove the installed packages

      yum remove docker-ce docker-ce-cli
    2. Delete resource

      rm -rf /var/lib/docker
      rm -rf /var/lib/containerd
    3. Configure alicloud image acceleration (omitted)

Underlying principle of Docker:

How does Docker work?

Docker is a client/server system. The Docker daemon runs on the host, and the client talks to it through a socket! When the Docker server receives an instruction from the docker client, it executes that command!

Why is Docker faster than VM?

  1. Docker has fewer abstraction layers than VM.

  2. Docker uses the host's kernel, while a VM needs a complete Guest OS.

    Therefore, when creating a new container, Docker does not have to load an operating-system kernel the way a virtual machine does, avoiding a full boot. Loading a virtual machine's Guest OS takes minutes! Docker reuses the host's operating system and skips that step, so startup takes seconds!

Common commands of Docker

Help command:

docker version  # display docker version information
docker info     # display docker system information, including the number of images and containers
docker command --help  # display help for a command, e.g. docker info --help

Help document address:

Image commands:

docker images  # view images on the local host


# Explanation
REPOSITORY  the image's repository source
TAG         the image's tag
IMAGE ID    the image's id
CREATED     when the image was created
SIZE        the image's size
# Options
 -a, --all    # list all images
 -q, --quiet  # only display image ids

docker search  # search for an image

# docker search mysql
# docker search mysql --filter=stars=5000  # only show images with more than 5000 stars

docker pull  # download an image

# Download an image: docker pull image_name[:tag]
# docker pull mysql   # if no tag (version) is given, the default is latest
# Docker downloads in layers; this is the union file system at the core of docker images
# docker pull mysql:5.7  # download a specified version
# docker pull mysql is equivalent to docker pull

docker rmi deletes an image!

# docker rmi -f image_id  # delete the specified image
# docker rmi -f image_id image_id image_id  # delete multiple images
# docker rmi -f $(docker images -aq)  # delete all images

Container command:

Note: a container can only be created once we have an image. To practice with Linux, download a centos image for testing and learning:

docker pull centos

Create a new container and start

docker run [optional parameters] image
# Parameter description
--name="Name"  container name, e.g. tomcat01, tomcat02; used to distinguish containers
-d             run in background mode
-it            run in interactive mode, entering the container to inspect it
-p             specify the container's port, e.g. -p 8080:8080 (host:container)
-P             randomly assign a port
# Test: start and enter a container
docker run -it centos /bin/bash  # start centos and open an interactive shell inside it
# To return to the host from the container, run exit

List all running containers:

docker ps            # list currently running containers
docker ps -a         # list running + previously run containers
docker ps -n=number  # display the `number` most recently created containers
docker ps -aq        # display only container ids

Exit container:

exit          # stop the container and exit
Ctrl + P + Q  # exit without stopping the container

Delete container:

docker rm container_id	# delete the specified container; a running container cannot be deleted this way (force it with rm -f)
docker rm -f $(docker ps -aq) 	#Delete all containers
docker ps -a -q|xargs docker rm #Delete all containers

Start and stop container operations:

docker start container id # Start container
docker restart container id # Restart container
docker stop container id # Stop the currently running container
docker kill container id # Force stop of current container

Other common commands:

Background startup container:

# docker run -d image_name
# Problem: docker ps shows that centos has stopped
# Common pitfall: a docker container must have a foreground process to keep running in the background; if docker finds no running application, it stops the container automatically
# e.g. nginx: if the container starts and finds it is serving nothing, it stops immediately, because there is no foreground program

View log:

docker logs -f -t --tail n container_id  # shows nothing if the container has produced no logs
# Write a shell script so the container keeps producing output
# docker run -d centos /bin/sh -c "while true;do echo Administrator;sleep 1;done"
# Show logs
  -tf            # show logs with timestamps, following new output
  --tail number  # show only the last `number` log lines
  # docker logs -tf --tail 10 35a813b9d2bc 

To view the process information in the container:

# Command docker top container id
# docker top 33996f0cbde3

To view a container's metadata:

# docker inspect container id
docker inspect 33996f0cbde3

Enter the currently running container:

# We usually run the container in the background mode. We need to enter the container and modify some configurations
# Command one
docker exec -it container_id bashShell  # e.g. /bin/bash
# test 
# docker exec -it e1fec3a9b3f7 /bin/bash
# ps -ef # Show all processes

# Command two
docker attach container id
# test
# docker attach 33996f0cbde3

# docker exec   # opens a new terminal inside the container, from which you can operate (commonly used)
# docker attach # attaches to the terminal the container is currently executing in; does not start a new process

Copy files from the container to the host:

docker cp container_id:path_in_container destination_path_on_host
# test
# Enter the inside of the container
# docker exec -it e1fec3a9b3f7 /bin/bash
# Create file in container 
# touch
# Copy to host
# docker cp /home

Docker project environment installation:

Deploy Nginx:

# 1. Search for the image: docker search nginx
# 2. Download the image: docker pull nginx
# 3. Run and test: docker images
#	 -d  run in the background
#	 --name  name the container
# 	 -p  host port:container internal port
#	 docker run -d --name nginx01 -p 3344:80 nginx
# 	 docker ps
# 	 curl localhost:3344 (you can also test from a browser on the host: http://host-ip:3344/)
# Enter the container: docker exec -it nginx01 /bin/bash
# whereis nginx  (find where nginx lives)

Deploy Tomcat:

# Official usage
# docker run -it --rm tomcat:9.0
# Previously we started containers in the background; after stopping, the container still exists. With --rm the container is removed as soon as it stops; this is generally used for testing
# Download it again
# docker pull tomcat
# Start it
# docker run -d -p 3355:8080 --name tomcat01 tomcat
# Test that access works
# Enter the container
# docker exec -it tomcat01 /bin/bash
# Copy the files from the webapps.dist folder into webapps
# cp -r webapps.dist/* webapps
# Problems found: 1. very few linux commands are available; 2. webapps is empty
# Reason: the image is minimized by default; everything unnecessary is stripped out to guarantee a minimal runnable environment!

Deploy es (full text search) + Kibana:

Elasticsearch (es for short) is a highly scalable, open-source full-text search and analytics engine. It can store, search, and analyze massive amounts of data quickly and in near real time.

# es exposes many ports
# es consumes a lot of memory
# es data generally needs to be placed in the security directory! mount 
# --net somenetwork?  network configuration

# Start elasticsearch
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.6.2
# After starting on linux, docker stats seems stuck: it shows a continuously refreshing CPU/memory status
# Ctrl+C exits the status view, since the status refreshes constantly
# es is very memory hungry: it can use over 1 GB, while the system may only have one core and 2 GB!

# Stop it quickly, then add a memory limit: modify the configuration via the -e environment option
docker run -d --name elasticsearch02 -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" elasticsearch:7.6.2


portainer (we will use this one)
# docker run -d -p 8088:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer

Rancher (used with CI/CD)

What is Portainer?

A graphical management tool for Docker! It provides a web panel for us to operate Docker!

# docker run -d -p 8088:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer

# Access test: http://ip:8088/
# Select the Local panel
# Not used much in everyday work; just for testing and playing around

Docker image explanation:

What is an image?

An image is a lightweight, executable, standalone software package that bundles a piece of software with its runtime environment. It contains everything needed to run the software: code, runtime, libraries, environment variables, and configuration files.

Docker image loading principle:

# UnionFS (union file system)
UnionFS (union file system): a layered, lightweight, high-performance file system. It records file-system modifications as successive layered commits, and it can mount different directories under the same virtual file system (unite several directories into a single virtual filesystem). The union file system is the foundation of Docker images. Images can be built up by inheritance: starting from a base image (one with no parent image), various specific application images can be made.
# Features: multiple file systems are loaded at the same time, but from the outside only one file system is visible. Union mounting superimposes all the layers, so the final file system contains all the underlying files and directories.
# Docker image loading principle
A docker image is actually composed of layer upon layer of file systems; this layering is UnionFS.

bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel. Linux loads bootfs at startup, and the bottom layer of a Docker image is bootfs. This layer is the same as in a typical Linux/Unix system, containing the boot loader and the kernel. When boot finishes, the whole kernel is in memory; at that point ownership of memory passes from bootfs to the kernel, and the system unmounts bootfs.

rootfs (root file system) sits above bootfs. It contains the standard directories and files of a typical Linux system: /dev, /proc, /bin, /etc, and so on. rootfs is what differs between operating-system distributions such as Ubuntu and CentOS.
# docker
For a streamlined OS, rootfs can be very small; it only needs the most basic commands, tools, and programs, because the bottom layer directly uses the host's kernel, so only rootfs has to be provided. For different linux distributions, bootfs is basically the same while rootfs differs, which is why different distributions can share bootfs.
# This is why a virtual machine boots in minutes while a container starts in seconds!

Layered understanding:

# Layered images
# Download an image and watch the log output: you can see it downloading layer by layer!
# The biggest advantage of this layered structure is resource sharing
# For example, if multiple images are built from the same base image, the host only needs to keep one copy of the base image on disk and load one copy into memory; that one copy serves all containers, and every layer of an image can be shared
# To view an image's layers, use the docker image inspect command!
# For example: docker image inspect redis:tag

# Understanding:
# Every Docker image starts from a base image layer. When you modify it or add new content, a new image layer is created on top of the current one.
# For example, if you create a new image based on Ubuntu Linux 16.04, that is the new image's first layer; adding a Python package creates a second image layer on top of the base; adding a security patch creates a third layer.
# Characteristics
# Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image!
# That writable layer is what we usually call the container layer; everything below it is the image layer!
# Container layer: all your operations happen in the container layer
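As a hedged sketch of this layering (a hypothetical build, not from the original notes), each instruction below produces one image layer, matching the Ubuntu/Python/patch example above:

```dockerfile
# Hypothetical example: each instruction creates one image layer
FROM ubuntu:16.04                     # layer 1: the base image
RUN apt-get update && \
    apt-get install -y python3       # layer 2: add a Python package
RUN apt-get install -y --only-upgrade openssl   # layer 3: a security patch
CMD ["/bin/bash"]                    # metadata only; the writable container layer is added at run time
```

All of these layers are read-only; the writable container layer only appears when the image is run.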

commit image:

# docker commit turns a container into a new image copy
# The command is similar to git commit
# docker commit -m="commit message" -a="author" container_id target_image_name:[tag]
# Hands-on test
# Start a default tomcat; the default tomcat has no application in webapps (no page to serve) (because the image is minimal)
# Copy the default files in and commit the result as a new image
# docker commit -a="tyx" -m="add webapps app" 2cb3afe58e1f tomcat02:1.0
# From now on you can use the modified image directly!

Container data volume:

What is a container data volume?

Requirement: data must be able to persist! And there should be a way to share data between containers!

Data generated inside a Docker container can be synchronized to the local host! This is volume technology! It is directory mounting: a directory inside the container is mounted onto a directory on Linux!

Summary: container persistence and synchronization! Data can be shared between containers!

Using data volumes:

# Method 1: mount directly on the command line with -v
# docker run -it -v host_directory:container_directory
# test
# docker run -it -v /home/ceshi:/home centos /bin/bash
# After startup, view the mount information with docker inspect container_id
# The mount information appears under "Mounts".
# File contents are also kept in sync through the mount!

# Stop the container - modify the file on the host - start the container - the data in the container is still synchronized
# docker stop container id -- stop container
# docker start container id 	 -- Start container
# docker attach container id -- enter container

# Benefits: modifying the file configuration can be modified on the host and automatically synchronized in the container!

Hands-on: MySQL:

# Persisting MySQL data
# Get the image
# docker pull mysql:5.7
# To run the container, mount the data directory! # Installing and starting mysql requires configuring a password!
# Official example:
# docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
# Native test:
# -d background operation
# -p port mapping
# -v volume mount
# -e environment configuration
# --name container name
# docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

# After successful startup, test the connection locally with SQLyog: the user is root and the password is the one set above
# SQLyog connects to the server's port 3310, which is mapped to 3306 inside the container
# Create a database in the local test and check whether the mapped path generates the data files!

# docker rm -f container name -- delete container
# The test results show that even if the container is deleted, the data is retained!
# In this way, data persistence is realized!

Named and anonymous mount:

# Anonymous mount
-v container_path
docker run -d -P --name nginx-1 -v /etc/nginx nginx
# View all volumes
docker volume ls
# If -v gives only the path inside the container and no host path, it is an anonymous mount!

# Named mount
docker run -d -P --name nginx02 -v tyx-nginx:/etc/nginx nginx
# Via -v volume name: path within container
# Viewing volumes  
# docker volume inspect volume name
# If no directory is specified, all volumes in docker containers live under /var/lib/docker/volumes/xxx/_data
# A named mount makes a volume easy to find; named mounts are used in most cases
# How to tell whether something is a named mount, an anonymous mount, or a specified-path mount:
-v container_path              # anonymous mount
-v volume_name:container_path  # named mount
-v /host_path:/container_path  # specified-path mount
# Extension
# Change read/write permissions via -v container_path:ro or :rw
ro readonly 	# read-only
rw readwrite	# readable and writable
# Once the permission is set, the container is restricted in how it can touch the mounted content
# docker run -d -P --name nginx02 -v tyx-nginx:/etc/nginx:ro nginx
# docker run -d -P --name nginx02 -v tyx-nginx:/etc/nginx:rw nginx

# With ro, the path can only be changed from the host; it cannot be written from inside the container!

First look at Dockerfile:

# A Dockerfile is the build file used to construct a docker image!
# The script generates an image. Images are built layer by layer, and the script is a list of commands: each command is one layer!
# Create a file (any name works here) and write the Dockerfile content into it
# File content: INSTRUCTION (uppercase) followed by its parameters
FROM centos

VOLUME ["volume01","volume02"]

CMD echo "----end----"
CMD /bin/bash
# Each command here is one image layer! Note the space before the trailing `.` (the build context) in the command below
docker build -f /home/docker-test-volume/dockerfile1 -t tyx/centos:1.0 .

# Start the self generated image
# docker run -it 935391a2bb00 /bin/bash

# If the image was built without declaring a volume, mount one manually: -v volume_name:container_path!

Data volume container:

# Sharing data among multiple mysql containers
docker run -d -p 3310:3306 -v /etc/mysql/conf.d -v /var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

docker run -d -p 3311:3306 -e MYSQL_ROOT_PASSWORD=123456 --name mysql02 --volumes-from mysql01 mysql:5.7

# Now the two containers' databases are synchronized!
# Conclusion:
# Configuration can be passed between containers this way; a data-volume container's data persists until no container is using it.


DockerFile introduction:

A dockerfile is the file used to build a docker image! It is a script of instructions and parameters!

Construction steps:

  1. Write a dockerfile file

  2. docker build builds into an image

  3. docker run runs the image

  4. docker push publishes the image (to DockerHub, the Alibaba Cloud image repository, etc.!)

    Many official images are bare-bones base packages, so we usually build our own images!

DockerFile construction process:


Each reserved keyword (instruction) must be uppercase;

Instructions are executed from top to bottom;

# marks a comment;

Each instruction creates and commits a new image layer!

Dockerfile is development oriented. To publish a project and make an image, you need to write a dockerfile file.

Docker image has gradually become the standard for enterprise delivery!

DockerFile: the build file, defining every step; the "source code" of the image;

DockerImages: the image generated by building the DockerFile; the final product that is published and run!

Docker container: a container is a running image, providing the actual service.

DockerFile instructions:

FROM		# the base image; everything is built from here
MAINTAINER	# who wrote the image: name + email
RUN			# commands to run while building the image
ADD			# add content, e.g. the tomcat tarball for a tomcat image (archives are auto-extracted)
WORKDIR		# the image's working directory
VOLUME		# directory to mount as a volume
EXPOSE		# declare the exposed port
CMD			# command to run when the container starts; only the last CMD takes effect, and it can be replaced at run time
ENTRYPOINT	# command to run when the container starts; run-time arguments are appended to it
ONBUILD		# instruction that runs when another DockerFile builds FROM this image
COPY 		# similar to ADD: copy local files into the image
ENV			# set environment variables during the build!
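To see several of these instructions together, here is a hedged, minimal sketch of a Dockerfile (the email, volume path, and port are illustrative assumptions, not from the original notes):

```dockerfile
FROM centos                       # base image
MAINTAINER tyx<tyx@example.com>   # author (placeholder email)
ENV MYPATH /usr/local             # build-time environment variable
WORKDIR $MYPATH                   # working directory inside the image
RUN yum -y install vim            # one layer: install vim
VOLUME ["/data"]                  # anonymous volume mount point (illustrative)
EXPOSE 80                         # declared port (illustrative)
CMD ["/bin/bash"]                 # default command; replaceable at run time
```

Built with `docker build -f file -t name:tag .`, each instruction above becomes one layer, as described in the layering section.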

Hands-on test:

# Create your own centos
# RUN yum -y install vim
# RUN yum -y install net-tools
# The barebones centos image lacks basic commands such as vim and ifconfig, so you need to install them yourself
1. Create a dockerfile:  vim mydockerfile-centos

2. Write the build instructions (the configuration file):
FROM centos
MAINTAINER tangyuxiang<>

ENV MYPATH /usr/local

RUN yum -y install vim
RUN yum -y install net-tools


CMD echo "----end----"
CMD /bin/bash

3. Build an image from this file
# Command: docker build -f dockerfile_path -t image_name:[tag] .
# docker build -f mydockerfile-centos -t mycentos:0.1 .

4. Test run
# docker run -it mycentos:0.1  (the version tag is needed, otherwise docker looks for latest)

# docker history image_id  # shows how the image was built, layer by layer

Difference between CMD and ENTRYPOINT:

CMD			# command run at container start; only the last CMD takes effect, and run-time arguments replace it
ENTRYPOINT	# command run at container start; run-time arguments are appended to it
# Test CMD

1. Create the dockerfile
# vim dockerfile-cmd-test
FROM centos
CMD ["ls","-a"]

2. Build the image
# docker build -f dockerfile-cmd-test -t cmdtest .

3. Run and test
# docker run image_name
# docker run cmdtest

# Result: the CMD command (ls -a) runs as soon as the container starts! But arguments cannot simply be appended at startup: an appended argument replaces CMD entirely!

1. Create the dockerfile
# vim dockerfile-entrypoint-test
FROM centos
ENTRYPOINT ["ls","-a"]

2. Build the image
# docker build -f dockerfile-entrypoint-test -t entrypointtest .

3. Run and test
# docker run entrypointtest
# docker run entrypointtest -l

# Result: the ENTRYPOINT command runs as soon as the container starts! And arguments can be appended at startup: -l is appended, so the container runs ls -a -l!

Hands-on: a Tomcat image

1. Prepare the files: the tomcat tarball and the jdk tarball!
# For example: under home/build/tomcat/ are the tomcat and jdk install packages

2. Write the dockerfile. The official naming is Dockerfile; build finds this file automatically, so it does not need to be specified with -f!

FROM centos
MAINTAINER tangyuxiang<>

COPY readme.txt /usr/local/readme.txt

ADD jdk-8u281-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.44.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local

ENV JAVA_HOME /usr/local/jdk-8u281-linux-x64
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.44
ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.44


CMD /usr/local/apache-tomcat-9.0.44/bin/ && tail -F /usr/local/apache-tomcat-9.0.44/bin/logs/catalina.out

3. Build the image (because the file is named Dockerfile, no file name has to be specified)
# docker build -t diytomcat .

4. Start the image
docker run -d -p 9090:8080 --name tangyuxiangtomcat -v /home/tangyuxiang/build/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.44/logs -v /home/tangyuxiang/build/tomcat/test:/usr/local/apache-tomcat-9.0.44/webapps/test diytomcat

5. Access test
# Enter the container
# docker exec -it container_id /bin/bash
# Test from inside the VM
# curl localhost:9090
# Test from the host machine: http://ip:port/
# Test result: the tomcat welcome page appears

6. Publish project()
# Put the index.html page and the WEB-INF folder in the test directory, and put the web.xml file inside WEB-INF
# index.html
<!DOCTYPE html>
<html lang="en">
<head><meta charset="UTF-8"><title>Test access page</title></head>
<body>Test page</body>
</html>

# web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns=""

# Finally, access via http://ip:port/test/ succeeded!

Publish your own image:

Publish your own image on DockerHub
# Address:

Push the image from the server:
1. Log in (tangyuxiang is your own dockerhub ID)
# docker login -u tangyuxiang

2. Push the image
# First re-tag the image, giving it a name under your own namespace:
# docker tag image_id image_name:[tag]
# docker tag image_id tangyuxiang/tangyuxiang:1.0
# docker tag image_id docker_id/repository_name:new_tag

# docker push image name: [tag]
# docker push tangyuxiang/tomcat:1.0 

Publishing images to Alibaba Cloud
# Not written up
# Packing / unpacking images:
# docker save command
# docker load command

Docker network:

Network connectivity:
# With the docker network connect command, a container can be joined to another network segment: it is as if the container gains an additional IP on that other network (like having both a public IP and an intranet IP)

Conclusion: to reach containers across networks, connect them with docker network connect!
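As a hedged sketch of the commands involved (the network and container names here are placeholders, not from the original notes; the commands need a Docker host, so they are shown commented in the same style as the rest of these notes):

```shell
# docker network create mynet            # create a custom bridge network
# docker network ls                      # list all networks
# docker run -d --name tomcat01 tomcat   # container joins the default bridge
# docker network connect mynet tomcat01  # tomcat01 now also gets an IP on mynet
# docker inspect tomcat01                # "Networks" shows both bridge and mynet
```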

Hands-on: deploy a Redis cluster:

Package a SpringBoot microservice as a Docker image:

1. Package a springboot project into a jar locally

2. Write a Dockerfile file
FROM java:8

COPY *.jar /app.jar

CMD ["--server.port=8080"]


ENTRYPOINT ["java","-jar","/app.jar"]

3. Build the docker image (docker build)
# docker build -t tyx666 .

4. Start mirroring
# docker run -d -p 8082:8080 --name tangyuxiang666 tyx666

5. test
# curl localhost:port
# http://ip:port/path/

Tags: Java Operation & Maintenance Docker

Posted on Fri, 19 Nov 2021 13:50:56 -0500 by -twenty