Docker Compose - detailed parameter explanation

Introduction to Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure all the services your application needs. Then, with a single command, you create and start all of those services from the configuration file.

If you are not familiar with the YAML file format, you can read the YAML introductory tutorial first.

Using Compose is a three-step process:

Use Dockerfile to define the environment of the application.

Use docker-compose.yml to define the services that make up the application so that they can run together in an isolated environment.

Finally, run docker-compose up to start and run the entire application.

A sample docker-compose.yml configuration looks like this (the configuration parameters are explained in the reference section below):

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

Compose installation
Linux
On Linux, you can download the Compose binary from GitHub. The releases are available at: https://github.com/docker/compose/releases .

Run the following command to download the current stable version of Docker Compose:

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

To install a different version of Compose, replace 1.24.1.

Apply executable permissions to binaries:

$ sudo chmod +x /usr/local/bin/docker-compose

Create a symbolic link:

$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Test that the installation succeeded:

$ docker-compose --version
docker-compose version 1.24.1, build 4667896b

Note: on Alpine, the following dependency packages are required: py-pip, python-dev, libffi-dev, openssl-dev, gcc, libc-dev, and make.

macOS
Docker Desktop for Mac and Docker Toolbox already include Compose along with other Docker applications, so Mac users do not need to install Compose separately. For Docker installation instructions, refer to the macOS Docker installation guide.

Windows
Docker Desktop for Windows and Docker Toolbox already include Compose along with other Docker applications, so Windows users do not need to install Compose separately. Refer to the Windows Docker installation guide for Docker installation instructions.

Usage
1. Prepare
Create a test directory:

$ mkdir composetest
$ cd composetest

Create a file named app.py in the test directory, and copy and paste the following:

composetest/app.py:

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

In this example, redis is the hostname of the redis container on the application's network, and it uses the default Redis port, 6379.

Create another file named requirements.txt in the composetest directory, as follows:

flask
redis
2. Create a Dockerfile
In the composetest directory, create a file named Dockerfile with the following contents:

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

Dockerfile content explanation:

FROM python:3.7-alpine: build the image from the Python 3.7 image.
WORKDIR /code: set the working directory to /code.

ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0

Set the environment variables used by the flask command.

RUN apk add --no-cache gcc musl-dev linux-headers: install gcc and related build tools so that Python packages such as MarkupSafe and SQLAlchemy can compile their speedups.

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

Copy requirements.txt and install Python dependencies.

COPY . .: copy the current directory . of the project into the working directory . of the image.
CMD ["flask", "run"]: the container's default command is flask run.
3. Create docker-compose.yml
Create a file named docker-compose.yml in the test directory, and then paste the following:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

The Compose file defines two services: web and redis.

web: this service uses an image built from the Dockerfile in the current directory. It then maps the container's exposed port 5000 to port 5000 on the host. This sample service uses the flask development server's default port, 5000.
redis: this service uses the public Redis image from Docker Hub.
4. Use the Compose command to build and run your application
In the test directory, start the application by executing the following command:

docker-compose up

If you want to run the services in the background, add the -d parameter:

docker-compose up -d

YAML configuration directive reference
version
Specifies which version of the Compose file format this YAML follows.
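For instance, a minimal sketch of the top of a Compose file using the 3.7 format referenced later in this guide (the nginx image is only illustrative):

version: "3.7"   # file format version, declared first
services:
  web:
    image: nginx:alpine   # illustrative image; any service definition goes here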

build
Specifies the build context path:

For example, the webapp service builds its image from the Dockerfile in the context path ./dir:

version: "3.7" services: webapp: build: ./dir

Or, as an object with the path specified under context, plus an optional dockerfile and args:

version: "3.7" services: webapp: build: context: ./dir dockerfile: Dockerfile-alternate args: buildno: 1 labels: - "com.example.description=Accounting webapp" - "com.example.department=Finance" - "com.example.label-with-empty-value" target: prod

context: the context path.
dockerfile: specifies the name of the Dockerfile used to build the image.
args: build arguments, i.e. environment variables that are only available during the build (see the sketch after this list).
labels: sets labels on the built image.
target: for multi-stage builds, specifies which stage to build.
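As a rough sketch of how args reaches the Dockerfile: the argument set in the Compose file is only visible inside the build, and the Dockerfile must declare it with a matching ARG line (the comment below states that assumption; buildno is taken from the example above):

version: "3.7"
services:
  webapp:
    build:
      context: ./dir
      args:
        buildno: 1   # visible only during the build, and only if the Dockerfile also declares "ARG buildno"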
cap_add, cap_drop
Add or drop Linux kernel capabilities for the container.

cap_add:
  - ALL        # grant all capabilities
cap_drop:
  - SYS_PTRACE # drop the ptrace capability

cgroup_parent
Specifies the parent cgroup for the container, which means the container inherits that group's resource limits.

cgroup_parent: m-executor-abcd
command
Overrides the default command run when the container starts.

command: ["bundle", "exec", "thin", "-p", "3000"]
container_name
Specify a custom container name instead of the generated default name.

container_name: my-web-container
depends_on
Set dependencies.

docker-compose up: starts services in dependency order. In the following example, db and redis are started before web.
docker-compose up SERVICE: automatically includes SERVICE's dependencies. In the following example, docker-compose up web also creates and starts db and redis.
docker-compose stop: stops services in dependency order. In the following example, web stops before db and redis.

version: "3.7" services: web: build: . depends_on: - db - redis redis: image: redis db: image: postgres

Note: the web service does not wait for redis and db to be "ready" before starting; depends_on only controls start order, not readiness (one readiness-based approach is sketched below).
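One way to wait for readiness is the long form of depends_on combined with a healthcheck. This condition syntax belongs to the v2.x file format and to newer Compose releases (the Compose Specification), not the classic v3 format shown above, so treat the following as a sketch; the Postgres pg_isready check is an illustrative assumption:

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # start web only after db's healthcheck passes
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5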

deploy
Specifies configuration related to deploying and running the service. It only takes effect in swarm mode.

version: "3.7" services: redis: image: redis:alpine deploy: mode: replicated replicas: 6 endpoint_mode: dnsrr labels: description: "This redis service label" resources: limits: cpus: '0.50' memory: 50M reservations: cpus: '0.25' memory: 20M restart_policy: condition: on-failure delay: 5s max_attempts: 3 window: 120s

Optional parameters:

endpoint_mode: how clients reach the swarm service.

endpoint_mode: vip

The swarm assigns the service a virtual IP (VIP), and all requests reach the service's containers through that virtual IP.

endpoint_mode: dnsrr

DNS round-robin (DNSRR): DNS queries for the service name return the list of container IPs, and requests rotate through that list.

labels: sets labels on the service. Labels defined on the containers (the labels key at the same level as deploy) override the labels under deploy.

mode: specifies the service's scheduling mode.

replicated: replicated service; runs the specified number of replicas across the cluster.

global: global service; one task is deployed on every node of the cluster.

(The original article includes an illustration comparing the two modes: replicated mode is shown in yellow boxes and global mode in gray boxes.)

replicas: when mode is replicated, use this parameter to set the number of replicas to run.

resources: configures resource limits. For example, the configuration above limits the CPU share and memory the redis service may use, to avoid it consuming too many resources and causing problems.

restart_policy: configures whether and how to restart containers when they exit.

condition: one of none, on-failure, or any (default: any).
delay: how long to wait between restart attempts (default: 0).
max_attempts: how many times to attempt a restart before giving up (default: keep retrying).
window: how long to wait before deciding whether a restart has succeeded (default: 0, i.e. decide immediately).
rollback_config: configures how the service should be rolled back if an update fails.

parallelism: the number of containers rolled back at a time. If set to 0, all containers are rolled back simultaneously.
delay: the time to wait between rolling back each group of containers (default: 0s).
failure_action: what to do if a rollback fails; one of continue or pause (default: pause).
monitor: how long after each task is rolled back to keep monitoring it for failure (ns|us|ms|s|m|h; default 0s).
max_failure_ratio: the failure rate tolerated during a rollback (default: 0).
order: the order of operations during the rollback; one of stop-first (serial rollback) or start-first (parallel rollback) (default: stop-first).
update_config: configures how the service should be updated; useful for configuring rolling updates (a sketch combining it with rollback_config follows this list).

parallelism: the number of containers updated at a time.
delay: the time to wait between updating each group of containers.
failure_action: what to do if the update fails; one of continue, rollback, or pause (default: pause).
monitor: how long after each task is updated to keep monitoring it for failure (ns|us|ms|s|m|h; default 0s).
max_failure_ratio: the failure rate tolerated during the update.
order: the order of operations during the update; one of stop-first (serial update) or start-first (parallel update) (default: stop-first).
Note: supported only in file format v3.4 and later.
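A minimal sketch combining update_config and rollback_config (rollback_config needs file format 3.7 or later; the image and the numbers below are illustrative, not from the original article):

version: "3.7"
services:
  web:
    image: nginx:alpine       # illustrative image
    deploy:
      replicas: 4
      update_config:
        parallelism: 2        # update two containers per batch
        delay: 10s            # wait 10s between batches
        failure_action: rollback
        order: start-first
      rollback_config:
        parallelism: 0        # roll back all containers at once
        order: stop-first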

devices
Specifies the device mapping list.

devices: - "/dev/ttyUSB0:/dev/ttyUSB0" dns

dns
Custom DNS servers; can be a single value or a list.

dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9

dns_search
Custom DNS search domain. It can be a single value or a list.

dns_search: example.com

dns_search:
  - dc1.example.com
  - dc2.example.com

entrypoint
Overrides the container's default entrypoint.

entrypoint: /code/entrypoint.sh
It can also be in the following format:

entrypoint:
  - php
  - -d
  - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so
  - -d
  - memory_limit=-1
  - vendor/bin/phpunit

env_file
Adds environment variables from a file. Can be a single value or a list.

env_file: .env
It can also be a list format:

env_file:
  - ./common.env
  - ./apps/web.env
  - /opt/secrets.env

environment
Adds environment variables. You can use either an array or a dictionary; any boolean values must be enclosed in quotes so the YAML parser does not convert them to True or False. The dictionary form is shown below, and a list-form sketch follows it.

environment:
  RACK_ENV: development
  SHOW: 'true'
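The same variables written in array form, as a small equivalent sketch:

environment:
  - RACK_ENV=development
  - SHOW=true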

expose
Exposes ports without publishing them to the host machine; they are only accessible to linked services.

Only internal ports can be specified as parameters:

expose: - "3000" - "8000

"
extra_hosts
Adds hostname mappings. Similar to the docker client's --add-host option.

extra_hosts: - "somehost:162.242.195.82" - "otherhost:50.31.209.229"

The above creates entries mapping hostnames to IP addresses in /etc/hosts inside this service's containers:

162.242.195.82 somehost
50.31.209.229 otherhost
healthcheck
Used to check whether the container for this service is running healthily.

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"] # the command used for the check
  interval: 1m30s    # interval between checks
  timeout: 10s       # timeout for a single check
  retries: 3         # number of retries before marking unhealthy
  start_period: 40s  # grace period after startup before checks count
image
Specifies the image the container runs. The following formats are all valid:

image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd  # image ID

logging
Logging configuration for the service.

driver: specifies the logging driver for the service's containers. The default is json-file. Three options are shown here:

driver: "json-file" driver: "syslog" driver: "none"

Only with the json-file driver can the following options be used to limit the number and size of log files.

logging:
  driver: json-file
  options:
    max-size: "200k" # maximum size of a single log file
    max-file: "10"   # keep at most 10 files

When the file limit is reached, the oldest files are automatically deleted.

With the syslog driver, you can use syslog-address to specify the address that receives the logs.

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

network_mode
Sets the network mode.

network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"

networks
Configures the networks the container joins, referencing entries under the top-level networks key.

services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
      other-network:
        aliases:
          - alias2

networks:
  some-network:
    # Use a custom driver
    driver: custom-driver-1
  other-network:
    # Use a custom driver which takes special options
    driver: custom-driver-2

aliases: other containers on the same network can use either the service name or this alias to connect to this service's containers.

restart
no: the default restart policy. The container is not restarted under any circumstances.
always: the container is always restarted after it exits.
on-failure: the container is restarted only when it exits abnormally (non-zero exit status).
unless-stopped: always restart the container when it exits, unless it was explicitly stopped before the Docker daemon restarted.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
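In a Compose file, restart is set per service; a minimal sketch (the redis image is just illustrative):

services:
  redis:
    image: redis:alpine
    restart: unless-stopped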
Note: in swarm mode, use restart_policy instead.

secrets
Store sensitive data, such as passwords:

version: "3.1" services: mysql: image: mysql environment: MYSQL_ROOT_PASSWORD_FILE: /run/secrets/my_secret secrets: - my_secret secrets: my_secret: file: ./my_secret.txt

security_opt
Overrides the container's default security labeling scheme.

security_opt:
  - label:user:USER   # set the user label for the container
  - label:role:ROLE   # set the role label for the container
  - label:type:TYPE   # set the type (security policy) label for the container
  - label:level:LEVEL # set the security level label for the container

stop_grace_period
Specifies how long to wait, when the container cannot handle SIGTERM (or whatever stop_signal is set), before sending SIGKILL to force it to stop.

stop_grace_period: 1s # wait for 1s
stop_grace_period: 1m30s # wait 1 minute 30 seconds
The default wait time is 10 seconds.

stop_signal
Sets an alternative signal used to stop the container. SIGTERM is used by default.

In the following example, SIGUSR1 is used instead of the signal SIGTERM to stop the container.

stop_signal: SIGUSR1
sysctls
Sets kernel parameters in the container. Either array or dictionary format can be used.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0

tmpfs
Mounts a temporary file system inside the container. Can be a single value or a list.

tmpfs: /run

tmpfs:
  - /run
  - /tmp

ulimits
Override the container's default ulimit.

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000

volumes
Mounts host paths or named data volumes into the container.

version: "3.7" services: db: image: postgres:latest volumes: - "/localhost/postgres.sock:/var/run/postgres/postgres.sock" - "/localhost/data:/var/lib/postgresql/data"
