Docker Compose
- When we used Docker earlier, we wrote a Dockerfile and then operated on containers with commands such as docker build and docker run. However, an application built on a microservice architecture usually consists of several microservices, and each microservice usually runs multiple instances. If every microservice had to be started and stopped by hand, efficiency would clearly be low and the maintenance burden high. Docker Compose solves this: it is a tool for defining and running multi-container Docker applications, making container management simple and efficient.
- See the following figure for Docker and Compose compatibility:
Description of Docker version change:
- Starting with version 1.13.x, Docker is split into an Enterprise Edition (EE) and a Community Edition (CE), and version numbers now follow the release date: for example, 17.03 means March 2017.
- Docker's Linux software repositories moved from https://apt.dockerproject.org and https://yum.dockerproject.org to https://download.docker.com, and the package names changed to docker-ce and docker-ee.
What is docker compose
- Compose is a tool for defining and managing multi-container applications, written in Python.
- A Compose configuration file describes the architecture of a multi-container application: which images, data volumes, networks, port mappings, and so on are used;
- A single command then manages all the services: start, stop, restart, and so on.
docker compose function:
- Suppose we need to deploy a Django project that also depends on components such as nginx, mysql, and redis.
- That means running four Docker containers, one per component. Managing every container by hand quickly becomes complicated, especially when the project has to be deployed at a customer's site.
- Docker Compose is a tool that manages all of a project's Docker containers at the same time, so the whole stack can be deployed and started with one command.
docker compose installation:
Method 1: Download and install docker compose
[root@linux-node1 ~]# curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
[root@linux-node1 ~]# chmod +x /usr/local/bin/docker-compose    # alternatively: pip install docker-compose
Method 2: unpack and install the binary directly
unzip docker-compose-linux-x86_64.zip   # the archive contains a single file: docker-compose
chmod +x docker-compose
mv docker-compose /usr/bin/
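Either way, a quick check confirms the binary is on the PATH (a minimal sketch; the exact version string depends on what was installed):

docker-compose --version    # should print the installed version, e.g. 1.15.0 as downloaded above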
YAML file format and preparation precautions:
- Note: docker compose uses yaml files to describe containers
- YAML is a very intuitive, highly readable data serialization format. It describes data in a way similar to XML, but with much simpler syntax.
- YAML expresses data structures through indentation: consecutive items start with a dash, key-value pairs are separated by colons, arrays can be written in square brackets, and hashes in curly brackets (a short example follows the notes below).
YAML file format considerations:
- Tab indentation is not supported; use spaces.
- Each nesting level is normally indented by 2 spaces.
- Leave one space after separators such as colons, commas, and dashes.
- Comments start with a pound sign (#).
- Values containing special characters should be enclosed in single quotes.
- Boolean-looking values (true, false, yes, no, on, off) must be enclosed in quotation marks so that the parser interprets them as strings.
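The following fragment is a minimal illustration of these rules (the service, image, and values are placeholders, not part of the projects below):

services:                 # a key-value pair; nesting is expressed by 2-space indentation
  web:
    image: nginx          # one space after the colon
    ports:
      - "80:80"           # list items start with a dash; quoted because the value contains a colon
    dns: [8.8.8.8, 9.9.9.9]            # an inline array in square brackets
    labels: {app: demo, tier: web}     # an inline hash in curly brackets
    environment:
      DEBUG: 'false'      # boolean-looking values are quoted so they stay strings
# lines beginning with # are comments; tabs must not be used for indentation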
compose configuration:
# field            description
build              # Specify the Dockerfile and build context path used to build the image (sub-options: context, dockerfile)
image              # Specify the image
command            # Command to execute, overriding the default command
container_name     # Specify the container name; container names must be unique, so a custom name prevents scaling
deploy             # Deployment and runtime configuration for the service; only used in Swarm mode
environment        # Add environment variables
networks           # Join a network; references an entry under the top-level networks key
ports              # Publish ports, same as -p; quote ports below 60 (YAML parses xx:yy as base 60)
volumes            # Mount a host path or a named volume; named volumes are defined under the top-level volumes key
restart            # Restart policy: no (default), always, on-failure, unless-stopped
hostname           # Container host name
Common commands:
# command      description
build          # Build or rebuild services
ps             # List containers
up             # Create and start containers
exec           # Execute a command inside a container
scale          # Set the number of containers for a service
top            # Display container processes
logs           # View container output
down           # Stop and remove containers, networks, volumes, and images
stop           # Stop services
start          # Start services
restart        # Restart services
# A more detailed reference appears at the end of this article
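As a quick sketch of a typical workflow (the directory and service names are placeholders for any project that contains a docker-compose.yml):

cd compose_lnmp                  # any directory containing a docker-compose.yml
docker-compose up -d             # build (if needed) and start all services in the background
docker-compose ps                # list the running containers
docker-compose logs -f nginx     # follow the output of one service
docker-compose exec mysql bash   # open a shell inside a running container
docker-compose down              # stop and remove the containers and networks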
Deploy LNMP website platform with Docker Compose
Reference official: https://docs.docker.com/compose/compose-file/
Project address: https://gitee.com/edushiyanlou/django-docker
File structure for the one-click LNMP deployment
├── docker-compose.yml        # Compose YAML file describing the containers to build
├── mysql
│   ├── conf
│   │   └── my.cnf            # my.cnf: MySQL main configuration file
│   └── data                  # data directory; MySQL data is persisted to the host here
├── nginx
│   ├── Dockerfile            # Dockerfile for building the nginx container
│   ├── nginx-1.12.1.tar.gz
│   └── nginx.conf            # nginx main configuration file
├── php
│   ├── Dockerfile            # Dockerfile for building the php container
│   ├── php-5.6.31.tar.gz
│   └── php.ini
└── wwwroot
    └── index.php             # site root
docker-compose.yml
version: '3'
services:
  nginx:
    hostname: nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 81:80
    networks:
      - lnmp
    volumes:
      - ./wwwroot:/usr/local/nginx/html
  php:
    hostname: php
    build:
      context: ./php
      dockerfile: Dockerfile
    networks:
      - lnmp
    volumes:
      - ./wwwroot:/usr/local/nginx/html
  mysql:
    hostname: mysql
    image: mysql:5.6
    ports:
      - 3306:3306
    networks:
      - lnmp
    volumes:
      - ./mysql/conf:/etc/mysql/conf.d
      - ./mysql/data:/var/lib/mysql
    command: --character-set-server=utf8
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: wordpress
      MYSQL_USER: user
      MYSQL_PASSWORD: user123
networks:
  lnmp:
mysql/conf/my.cnf
[mysqld]
user=mysql
port=3306
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=/var/run/mysql/mysql.pid
log_error=/var/log/mysql/error.log
character_set_server=utf8
max_connections=3600
nginx/Dockerfile
FROM centos:7
MAINTAINER www.aliangedu.com
RUN yum install -y gcc gcc-c++ make openssl-devel pcre-devel
ADD nginx-1.12.1.tar.gz /tmp
RUN cd /tmp/nginx-1.12.1 && \
    ./configure --prefix=/usr/local/nginx && \
    make -j 2 && \
    make install
RUN rm -rf /tmp/nginx-1.12.1* && yum clean all
COPY nginx.conf /usr/local/nginx/conf
WORKDIR /usr/local/nginx
EXPOSE 80
CMD ["./sbin/nginx", "-g", "daemon off;"]
nginx/nginx.conf
user root;
worker_processes auto;
error_log logs/error.log info;
pid logs/nginx.pid;

events {
    use epoll;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root html;
        index index.html index.php;

        location ~ \.php$ {
            root html;
            fastcgi_pass php:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
php/Dockerfile
FROM centos:7
MAINTAINER www.aliangedu.com
RUN yum install -y gcc gcc-c++ make gd-devel libxml2-devel libcurl-devel libjpeg-devel libpng-devel openssl-devel
ADD php-5.6.31.tar.gz /tmp/
RUN cd /tmp/php-5.6.31 && \
    ./configure --prefix=/usr/local/php \
    --with-config-file-path=/usr/local/php/etc \
    --with-mysql --with-mysqli \
    --with-openssl --with-zlib --with-curl --with-gd \
    --with-jpeg-dir --with-png-dir --with-iconv \
    --enable-fpm --enable-zip --enable-mbstring && \
    make -j 4 && \
    make install && \
    cp /usr/local/php/etc/php-fpm.conf.default /usr/local/php/etc/php-fpm.conf && \
    sed -i "s/127.0.0.1/0.0.0.0/" /usr/local/php/etc/php-fpm.conf && \
    sed -i "21a \daemonize = no" /usr/local/php/etc/php-fpm.conf
COPY php.ini /usr/local/php/etc
RUN rm -rf /tmp/php-5.6.31* && yum clean all
WORKDIR /usr/local/php
EXPOSE 9000
CMD ["./sbin/php-fpm", "-c", "/usr/local/php/etc/php-fpm.conf"]
wwwroot/index.php
<?php phpinfo();?>
Run the one-click deployment command:
docker-compose -f docker-compose.yml up
docker-compose -f docker-compose.yml up -d    # the -d parameter runs everything in the background
Description:
http://192.168.0.211:81/    # after the command above finishes, this URL shows the phpinfo() page
Note: to serve other pages, simply replace index.php (or add files) under wwwroot.
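A quick check that the stack came up (a sketch; replace 192.168.0.211 with your own host IP):

docker-compose ps                   # the nginx, php, and mysql containers should all be Up
curl -I http://192.168.0.211:81/    # should return HTTP/1.1 200 OK from nginx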
Description of docker-compose.yml
- The file defines the three containers of the LNMP environment: nginx, php, and mysql.
- nginx and php are built from scratch with our own Dockerfiles, while mysql uses the official image directly.
version: '3'                     # Compose file format version
services:                        # top-level services key
  nginx:                         # service name, used to manage the container
    hostname: nginx              # container host name
    build:                       # build the nginx container
      context: ./nginx           # build context: the nginx folder in the current directory
      dockerfile: Dockerfile     # build with the Dockerfile inside the nginx folder
    ports:                       # exposed ports
      - 81:80                    # map host port 81 to container port 80
    networks:                    # network the container joins
      - lnmp
    volumes:                     # mount container data volumes onto host paths
      - ./wwwroot:/usr/local/nginx/html
  php:
    hostname: php
    build:
      context: ./php
      dockerfile: Dockerfile
    networks:
      - lnmp
    volumes:
      - ./wwwroot:/usr/local/nginx/html
  mysql:
    hostname: mysql
    image: mysql:5.6             # reference the official mysql image directly
    ports:
      - 3306:3306
    networks:
      - lnmp
    volumes:
      - ./mysql/conf:/etc/mysql/conf.d
      - ./mysql/data:/var/lib/mysql
    command: --character-set-server=utf8   # arguments passed to mysql, here setting the character set
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: wordpress
      MYSQL_USER: user
      MYSQL_PASSWORD: user123
networks:
  lnmp:                          # create the network
The difference between docker, docker compose, docker swarm and k8s
Docker-Compose
- Docker Compose is used to manage your containers. Imagine having hundreds of containers to start: starting them one by one would take a long time.
- With Docker Compose you only need to write a single file that declares the containers to start and their configuration parameters.
- When that file is executed, Docker starts every container according to the declared configuration; a single docker-compose up brings them all up.
- However, Docker Compose can only manage containers on the current host; it cannot start containers on other hosts.
Docker Swarm
- Docker Swarm is a tool for managing Docker containers across multiple hosts. It helps you start containers and monitors their state.
- If a container becomes unhealthy, Swarm starts a new container to keep the service available, and it provides load balancing between services.
Kubernetes
- Kubernetes plays the same role as Docker Swarm: it is a cross-host container management platform.
- k8s is a container management platform developed by Google based on its many years of operations experience, while Docker Swarm is developed by Docker itself.
Core role: rapid iteration and service self-healing
Project environment introduction
Deploying django + nginx + uwsgi + celery + redis + mysql with Docker Compose
Project diagram
Description of project directory structure
Project address: https://gitee.com/edushiyanlou/django-docker
django-docker                      ## project root
│  .gitignore                      # files that git should not track or upload
│  docker-compose.yml              # docker-compose file
│  Dockerfile                      # Dockerfile for deploying the django project
│  README.md                       # project readme
│  requirements.txt                # packages the project must install
│
├─nginx                            ## nginx container configuration files
│  │  nginx.conf                   # /etc/nginx/nginx.conf configuration file
│  │
│  └─conf                          # /etc/nginx/conf.d configuration folder
│        default.conf
│
└─web                              ## web container for deploying the django project
   │  manage.py
   │  uwsgi.ini                    # uwsgi configuration for the django project
   │
   ├─demoapp
   │  │  admin.py
   │  │  apps.py
   │  │  models.py
   │  │  tasks.py                  # celery task definitions
   │  │  tests.py
   │  │  urls.py
   │  │  views.py
   │  │  __init__.py
   │  │
   │  ├─migrations
   │  │      __init__.py
   │  │
   │  └─templates
   │     └─demoapp
   │           celery_detail.html  # page showing the result of a specific celery call
   │           celery_index.html   # page listing the celery task views
   │           index.html          # project main page
   │
   └─web
         celery.py                 # celery configuration file
         settings.py
         urls.py
         wsgi.py
         __init__.py               # imports the celery app
Project document description
Initialize a django project
- Project documents
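For reference, the skeleton above could be recreated with commands along these lines (a sketch only; the repository already contains all of these files):

pip install -r requirements.txt     # django, celery, etc. — the exact packages are listed in requirements.txt
django-admin startproject web       # creates web/manage.py and the web/web/ package
cd web
python manage.py startapp demoapp   # creates the demoapp application shown in the tree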
urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('', include('demoapp.urls')),
    path('admin/', admin.site.urls),
]
demoapp/urls.py
from django.urls import path

from . import views

app_name = 'demoapp'
urlpatterns = [
    path('', views.index, name='index'),
    path('celery/', views.celery_index, name='celery_index'),
    path('celery/random_add/', views.random_add, name='celery_random_add'),
    path('celery/random_mul/', views.random_mul, name='celery_random_mul'),
    path('celery/random_xsum/', views.random_xsum, name='celery_random_xsum'),
]
demoapp/views.py
import random

from django.shortcuts import render

from . import tasks


def index(request):
    context = {}
    return render(request, 'demoapp/index.html', context)


def celery_index(request):
    context = {}
    return render(request, 'demoapp/celery_index.html', context)


def random_add(request):
    a, b = random.choices(range(100), k=2)
    tasks.add.delay(a, b)
    context = {'function_detail': 'add({}, {})'.format(a, b)}
    return render(request, 'demoapp/celery_detail.html', context)


def random_mul(request):
    a, b = random.choices(range(100), k=2)
    tasks.mul.delay(a, b)
    context = {'function_detail': 'mul({}, {})'.format(a, b)}
    return render(request, 'demoapp/celery_detail.html', context)


def random_xsum(request):
    array = random.choices(range(100), k=random.randint(1, 10))
    tasks.xsum.delay(array)
    context = {'function_detail': 'xsum({})'.format(array)}
    return render(request, 'demoapp/celery_detail.html', context)
- Celery configuration files
web/__init__.py
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']
web/celery.py
import os

from celery import Celery

# To access the Django database (or anything else from Django) in your own script,
# the Django settings environment variable must be configured first.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'web.settings')

# app name
app = Celery('web')


# celery configuration
class Config:
    BROKER_URL = 'redis://redis:6379'            # e.g. redis://127.0.0.1:6379 outside Compose
    CELERY_RESULT_BACKEND = 'redis://redis:6379'


app.config_from_object(Config)

# auto-discover the tasks.py file in every installed app
app.autodiscover_tasks()
demoapp/tasks.py
# Create your tasks here
from celery import shared_task


@shared_task
def add(x, y):
    return x + y


@shared_task
def mul(x, y):
    return x * y


@shared_task
def xsum(numbers):
    return sum(numbers)
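Once the web and celery containers are up, the tasks can also be exercised by hand; a minimal sketch (the docker-compose exec command assumes the web service defined in the compose file below):

# open a Django shell inside the running web container:
#   docker-compose exec web python manage.py shell
from demoapp.tasks import add

result = add.delay(2, 3)        # queued on the redis broker, executed by the celery worker
print(result.get(timeout=10))   # prints 5 once the worker has finished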
nginx container related configuration files:
django-docker\nginx\nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
django-docker\nginx\conf\default.conf
server {
    listen 80;
    server_name localhost;
    charset utf-8;
    client_max_body_size 10M;

    location /static/ {
        alias /django_static/;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass web:8000;
    }
}
web configuration file:
django-docker\Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# ADD . /code/
django-docker\web\uwsgi.ini
[uwsgi]
socket=:8000
chdir=/code/web
module=web.wsgi:application
pidfile=/tmp/web-master.pid
master=True
vacuum=True
processes=1
max-requests=5000
docker-compose.yml file:
docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - ./mysql:/var/lib/mysql
    expose:
      - "3306"
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=djangodocker
      - MYSQL_USER=django
      - MYSQL_PASSWORD=django

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf:/etc/nginx/conf.d
      - ./web/staticfiles:/django_static
    ports:
      - "80:80"
    depends_on:
      - web

  redis:
    image: redis:alpine
    expose:
      - "6379"
    restart: always

  web:
    build: .
    # command: python manage.py runserver 0:8000
    # ports:
    #   - "8000:8000"
    command: uwsgi --ini uwsgi.ini
    working_dir: /code/web
    volumes:
      - .:/code
    expose:
      - "8000"
    depends_on:
      - mysql
      - redis

  celery:
    build: .
    command: celery -A web worker -l info
    working_dir: /code/web
    volumes:
      - .:/code
    depends_on:
      - mysql
      - redis
Detailed explanation of docker-compose.yml
docker-compose.yml details:
version: '3'                      # Compose file format version
services:                         # top-level services key
  mysql:                          # service name, used to build, link, and manage the container
    image: mysql:5.7              # use the official mysql image
    volumes:
      - ./mysql:/var/lib/mysql    # mount the local ./mysql folder onto /var/lib/mysql inside the container
    expose:
      - "3306"                    # expose port 3306 only to containers linked to this one
    restart: always               # automatically restart this container (e.g. after the host reboots)
    environment:
      - MYSQL_ROOT_PASSWORD=root     # mysql root password
      - MYSQL_DATABASE=djangodocker  # create the djangodocker database
      - MYSQL_USER=django            # create a django user
      - MYSQL_PASSWORD=django        # with password django

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf:/etc/nginx/conf.d
      - ./web/staticfiles:/django_static
    ports:
      - "80:80"                   # bind container port 80 to host port 80
    depends_on:
      - web                       # the web container must start before the nginx container

  redis:
    image: redis:alpine
    expose:
      - "6379"
    restart: always

  web:
    build: .
    # command: python manage.py runserver 0:8000
    # ports:
    #   - "8000:8000"
    command: uwsgi --ini uwsgi.ini   # command that starts uwsgi
    working_dir: /code/web           # working directory of the project
    volumes:
      - .:/code                      # mount everything in the current folder onto /code in the container
    expose:
      - "8000"
    depends_on:                      # mysql and redis must start before the web container
      - mysql
      - redis

  celery:
    build: .
    command: celery -A web worker -l info
    working_dir: /code/web
    volumes:
      - .:/code
    depends_on:
      - mysql
      - redis
Similar instruction comparison
'''1. expose vs ports'''
# ports:  binds container ports to host ports, so the Docker service is reachable from outside the host
# expose: exposes the container port only to linked containers; nothing is published on the host

'''2. depends_on vs links'''
# depends_on: the containers this container depends on must be started first
# links:      ensures the container can still be reached if its IP changes
#             (essentially deprecated: even without links, a container can be reached by its service name)
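A minimal side-by-side sketch of the two port options (the services and ports are illustrative):

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"     # published on the host: reachable at http://<host-ip>:80
  mysql:
    image: mysql:5.7
    expose:
      - "3306"      # reachable as mysql:3306 from other services on the same network, not from the host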
Compose common service configuration reference
- The Compose file is a YAML file that defines services, networks, and volumes. The default file name of the Compose file is docker-compose.yml
- Tip: the file can use either the .yml or the .yaml extension; both work.
- As with plain docker commands, the options specified in the Dockerfile (for example CMD, EXPOSE, VOLUME, ENV) are respected by default; you do not need to repeat them in docker-compose.yml.
- You can also use Bash-like ${VARIABLE} syntax to reference environment variables in configuration values; see Variable substitution for details (a short example follows this list).
- This section covers the configuration options supported by the version 3 service definition.
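For instance, variable substitution lets one file serve several environments; the values come from the shell or from an .env file placed next to docker-compose.yml (TAG and MYSQL_ROOT_PASSWORD here are illustrative names, not part of the projects above):

# .env, placed next to docker-compose.yml
TAG=5.7
MYSQL_ROOT_PASSWORD=secret

# docker-compose.yml
services:
  mysql:
    image: "mysql:${TAG}"                            # becomes mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}   # value injected at run time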
build
Build can specify the path containing the build context:
version: '2'
services:
  webapp:
    build: ./dir
Or as an object with a context path, an alternate Dockerfile, and args values:

version: '2'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1

The webapp service will build its image from the Dockerfile-alternate file in the ./dir directory.
If you specify both image and build, Compose builds the image from the directory given in build and names it with the name and tag given in image.

build: ./dir
image: webapp:tag

The result is an image built from ./dir, named webapp and tagged tag.
context
- The directory path containing the Dockerfile, or the URL of a git repository.
- When the value is a relative path, it is interpreted relative to the location of the Compose file. This directory is also the build context sent to the Docker daemon; both forms are shown in the sketch below.
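A small sketch of the two kinds of context (the repository URL is purely illustrative):

services:
  webapp:
    build:
      context: ./dir                                 # a local path, relative to this compose file
  webapp_from_git:
    build:
      context: https://github.com/example/repo.git   # a git repository URL also works as a context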
dockerfile
- An alternate Dockerfile. Compose uses this file to build the image; a build context path must also be specified.
args
- Build arguments for the image. These variables are only available during the build process.
First, specify the parameters to be used in Dockerfile:
ARG buildno
ARG password

RUN echo "Build number: $buildno"
RUN script-requiring-password.sh "$password"
Then specify the parameter under the args key. You can pass mappings or lists:
build:
  context: .
  args:
    buildno: 1
    password: secret

build:
  context: .
  args:
    - buildno=1
    - password=secret
Note: YAML Boolean values (true, false, yes, no, on, off) must be enclosed in quotation marks so that the parser can interpret them as strings.
image
Specifies the image to start the container from. It can be a repository/tag or a full or partial image ID.
image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd
If the image does not exist locally, Compose tries to pull it from the registry. If build is also specified, Compose builds the image with the given build options and tags it with the name and tag given by image.
container_name
Specify a custom container name instead of the generated default name.
container_name: my-web-container
Because Docker container names must be unique, a service with a custom container name cannot be scaled to multiple containers.
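Roughly what that looks like in practice (a sketch using the scale command listed earlier):

docker-compose up -d web
docker-compose scale web=3   # refused when container_name is set, because all three
                             # replicas would need the same (unique) container name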
volumes
Volume mount settings. You can specify a host path (HOST:CONTAINER) or add an access mode (HOST:CONTAINER:ro). Volumes are mounted read-write (rw) by default; ro makes them read-only.
Relative host paths are expanded relative to the directory of the Compose file in use, and should always start with . or ..
volumes:
  # Just specify a path and let the engine create a volume
  - /var/lib/mysql
  # An absolute path mapping
  - /opt/data:/var/lib/mysql
  # A path relative to the current Compose file
  - ./cache:/tmp/cache
  # A path relative to the user's home directory, mounted read-only
  - ~/configs:/etc/configs/:ro
  # A named volume
  - datavolume:/var/lib/mysql
To reuse a mounted volume across multiple services, declare a named volume under the top-level volumes key. This is not mandatory: the following example also shares data between services, but it is not the recommended way.
version: "3" services: web1: build: ./web/ volumes: - ../code:/opt/web/code web2: build: ./web/ volumes: - ../code:/opt/web/code
Note: defining a volume under the top-level volumes key and referencing it from each service's volume list replaces volumes_from from earlier versions of the Compose file format.
version: "3" services: db: image: db volumes: - data-volume:/var/lib/db backup: image: backup-service volumes: - data-volume:/var/lib/backup/data volumes: data-volume:
command
Overrides the default command when the container is started.
command: bundle exec thin -p 3000
The command can also be written as a list, in the same style as a Dockerfile CMD:
command: ["bundle", "exec", "thin", "-p", "3000"]
links
Link to containers of another service. Specify either the service name plus a link alias (SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
- Inside the containers of the web service, the database in the db container can be reached through the alias database of the linked db service; if no alias is given, the service name itself is used.
- Links are not required for services to communicate: by default, any service can reach any other service by its service name. (Under the hood, container-to-container communication relies on /etc/hosts name resolution, so an application can address a linked service by its alias just like localhost, provided the containers are attached to a common, routable network.)
- links also plays a role similar to depends_on: it declares dependencies between services and therefore influences the startup order.
external_links
- Link to containers started outside this docker-compose.yml, or even entirely outside of Compose. The parameter format is the same as for links.
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
expose
- Exposes ports without publishing them to the host; they are only reachable by linked services.
- Only the internal (container) port can be specified.
expose: - "3000" - "8000"
ports
- Expose port information.
- Common short format: use HOST:CONTAINER, or specify only the container port (a host port is then chosen at random).
- Note: when mapping ports in HOST:CONTAINER format, you may get wrong results if the ports are lower than 60, because YAML parses xx:yy as a base-60 number. It is therefore recommended to always write port mappings as strings.
Simple short format:
ports: - "3000" - "3000-3005" - "8000:8000" - "9090-9091:8080-8081" - "49100:22" - "127.0.0.1:8001:8001" - "127.0.0.1:5000-5010:5000-5010" - "6060:6060/udp"
In v3.2, the long format syntax of ports allows the configuration of additional fields that cannot be represented in short format.
Long format:
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
target: the port inside the container
published: the port published on the host
protocol: the port protocol (tcp or udp)
mode: host or ingress; host publishes a host port on each node, while ingress load-balances a swarm mode port.
restart
no is the default restart policy: the container is never restarted automatically. always restarts the container in every case; on-failure restarts it only when the exit code indicates an error; unless-stopped restarts it unless it was explicitly stopped.
restart: "no" restart: always restart: on-failure restart: unless-stopped
environment
- Adds environment variables. Either an array or a dictionary form can be used. Any boolean-like value (true, false, yes, no) needs to be quoted to keep the YAML parser from converting it to True or False.
- A variable listed with only a name picks up its value from the environment of the machine running Compose, which is useful for keeping secrets out of the file.
environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET
- Note: if the service specifies the build option, variables defined with environment are not visible during the build. Use the args sub-option of build to define build-time variables.
pid
Sets the PID mode to the host's PID mode, so the container shares the PID address space with the host operating system. Containers started with this flag can access and manipulate processes of other containers and of the host, and vice versa; in other words, containers with this option can see and act on each other's process IDs.
pid: "host"
dns
Configures the DNS servers; the value can be a single address or a list.
dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9