Docker Deep Dive Series | Swarm Multi-Node in Practice

What is Docker Swarm
Why use Swarm
Service Discovery Mechanisms
Load Balancing Mechanism Routing Mesh
Swarm mode operation mechanism
Objectives
Set up virtual machine nodes
Set up swarm cluster
Build Project
References
Common commands for Docker Swarm

Docker has been around for many years and is no longer new, yet many companies and developers have had only limited exposure to it, and people with real hands-on experience are still relatively rare. This series of tutorials focuses on practical operation, tries to stick to the essentials, and explains things from my own understanding. For the formal concepts, please consult the official documentation. Each chapter in this series builds on the previous one, so it is recommended that you start with the earlier chapters.

Navigation for this series of tutorials:
Docker Deep Dive Series | Container First Experience
Docker Deep Dive Series | Image First Experience
Docker Deep Dive Series | Single-Node Multi-Container Network Communication
Docker Deep Dive Series | Container Data Persistence
Docker Deep Dive Series | Single-Machine Nginx + Springboot in Practice
Docker Deep Dive Series | Docker Compose Multi-Container in Practice

Tutorial purposes:

  • Learn what docker swarm is & why to use it
  • Understanding the docker swarm network model
  • Understanding the core implementation mechanisms in the swarm model
  • Learn how to define and manage services through a docker compose file
  • Learn how to create a service using a docker compose file
  • Understanding the basic commands of docker stack
  • Understanding the basic commands of docker service
  • Master docker swarm through a hands-on project
Advance preparation

1. Download mysql

docker pull mysql

2. Download nginx

docker pull nginx

3. Clone credit-facility-service (use the docker branch) as the deployment demo for later

git clone https://github.com/EvanLeung08/credit-facility-service.git

4. For installation of the virtual machines, CentOS, and the docker environment, see Chapter 1; the rest of this tutorial assumes CentOS and docker are already installed
Docker Deep Dive Series | Container First Experience

Swarm Basic Concepts

What is Docker Swarm


Simply put, Docker Swarm is a container orchestration tool that manages container clusters running across multiple physical or virtual machines. It takes care of orchestration, scheduling, and cluster management: cluster activity is controlled by the cluster manager, the machines that join the cluster are called nodes, and users can manage containers deployed across multiple hosts from a single place.

Under the hood, swarm mode is built on SwarmKit, a toolkit for scalable distributed systems that provides node discovery, Raft-based consensus, task scheduling, and orchestration primitives; the Raft consensus algorithm is used to coordinate and make decisions across the cluster.

Here are some key terms for Docker Swarm:

  • Node: In orchestration terms, a node is a host that participates in the cluster. A single physical host can run multiple VMs, each acting as a node.

  • Manager Node: This node is responsible for maintaining the swarm orchestration state and managing the cluster environment.

  • Worker Node: This node is responsible for executing the tasks defined by the manager node. It constantly reports its state to the manager node and runs the services assigned to it.

  • Services: These are the tasks executed on manager or worker nodes; a service can be understood as a group of identical running tasks that together make up one service.

  • Task: A task carries a Docker container and the commands that run inside it; it is swarm's atomic scheduling unit. If a task fails, the orchestrator creates a new replica task, which spawns a new container (a short command-line illustration of these terms follows right after this list).
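To make these terms concrete, here is a minimal sketch (the service name demo-web and the nginx image are only placeholders for illustration, not part of this chapter's project). One service is created; swarm turns it into three tasks, each wrapping one container, scheduled across the nodes:

# Create a service with 3 replica tasks (run this on a manager node)
docker service create --name demo-web --replicas 3 nginx

# List the service's tasks and the node each container was scheduled on
docker service ps demo-web

# Clean up the demo service
docker service rm demo-web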

Why use Swarm

Docker Swarm and Kubernetes (k8s) are currently the mainstream container orchestration technologies, but most enterprises use k8s for container cluster management. k8s is backed by Google and has a very active open source community; with the rapid growth of cloud vendors in recent years, k8s is clearly the trend. For real projects I personally recommend k8s for container orchestration and management, but this series is about docker, so I will not say much about it here; it will be covered later in the k8s topic.

Without a multi-machine container management technology like Swarm, containers would be difficult to manage and there would be no easy way for containers to communicate across machines. Docker Swarm makes it easy to publish and manage applications on multiple machines, and we do not need to care which node each container instance lands on. Swarm exposes our applications as services and has built-in service discovery and load balancing, so a container cluster running on multiple nodes feels as simple as a single running application; it is also easy to scale out and get automatic fault tolerance (when a container belonging to a swarm task crashes, a new container is automatically started to replace it). A Swarm cluster usually has several worker nodes and at least one manager node, which is responsible for using the worker nodes' resources efficiently and keeping the cluster running as expected, improving application availability.

Swarm's Network Model


The following three network concepts are important for Swarm Cluster Services:

  • Overlay - The overlay network manages communication among the Docker daemons participating in the swarm. You can create overlay networks in the same way as user-defined networks for standalone containers. You can also attach a service to one or more existing overlay networks to enable service-to-service communication. Overlay networks are Docker networks that use the overlay network driver.

  • ingress - The ingress network is a special overlay network that facilitates load balancing among the service's nodes. When any swarm node receives a request on a published port, it hands the request to a module called IPVS. IPVS keeps track of all the IP addresses participating in that service on the ingress network, selects one of them, and routes the request to it.
    The ingress network is created automatically when a node runs swarm init or swarm join. Most users do not need to customize its configuration, but Docker 17.05 and later allows you to do so.

  • docker_gwbridge - docker_gwbridge is a bridge network that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. By default, every container a service is running in is connected to the docker_gwbridge network of its local Docker daemon host.

The docker_gwbridge network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.

Core Implementation Mechanisms of Swarm

Service Discovery Mechanisms

Docker Engine has an embedded DNS server. When Docker is not running in swarm mode it is used by containers, and when Docker Engine runs in swarm mode it is used by tasks. It provides name resolution for all containers on the host that are on a bridge, overlay, or MACVLAN network. Each container forwards its DNS queries to the Docker engine, which first checks whether the target container or service is on the same network as the requesting container. If it is, the engine looks up the IP (or virtual IP) address matching the container, task, or service name in its internal key-value store and returns it to the requester.

The Docker engine only returns the IP address if the matching resource is on the same network as the requesting container. The benefit of this is that a Docker host only stores the DNS entries for networks in which that node has containers or tasks, so nodes do not store information that is irrelevant to them or that other containers do not need to know.

In the figure above, there is a custom network named custom-net. Two services run on this network: myservice and myclient. myservice has two tasks associated with it, while the client has only one.

The client myclient then issues a curl request to myservice, and therefore also issues a DNS request. The container's built-in resolver forwards the query to the Docker engine's DNS server. The request for myservice is then resolved to the virtual IP 10.0.0.2 and returned to the client, which can access the container through that virtual IP.
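If you want to observe this mechanism yourself, here is a small sketch. It assumes a service named myservice is already running on an overlay network and that the client container has nslookup available; both are assumptions for illustration only:

# Resolve the service name from inside a container attached to the same overlay network
docker exec <myclient-container-id> nslookup myservice

# Show the virtual IPs Docker has assigned to the service on each of its networks
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' myservice

The address returned by the DNS lookup should match one of the virtual IPs reported by docker service inspect.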

Load Balancing Mechanism Routing Mesh

Docker Internal Request Load Balancing

Docker enables this feature automatically when a service is created, so every service immediately gets a virtual IP address on its network. As mentioned in the service discovery section above, when a service is requested the resulting DNS query is forwarded to the Docker engine, which returns the service's IP, the virtual IP. Traffic sent to that virtual IP is load-balanced across all healthy containers of the service on the network. All of this load balancing is done by Docker, because the client is given only a single entry point (one IP).

Docker External Request Load Balancing (Ingress)

By default this external load balancing is not active; it comes into play when a service is created or updated with the publish flag to expose a port. Every node in the cluster then starts listening on the published port, which means every node can respond to requests for the service mapped to that port.

What happens when a node receives a request but has no container instance of the service? Starting with Docker 1.12, which integrated swarm mode into Docker Engine, there is a feature called the routing mesh, which uses IP Virtual Server (IPVS) and iptables to load-balance requests at Layer 4. IPVS implements Layer 4 load balancing in the Linux kernel, which allows requests for TCP/UDP-based services to be redirected to the actual backend (in this case, a container). In the swarm-specific scenario, every node listens on the exposed port and forwards requests to the VIP (virtual IP) of the exposed service through a special overlay network called ingress. This overlay network is used only when external traffic has to be passed to the requested service. From there, docker uses the same internal load balancing strategy described above.


In the figure above, a service with two replicas is created on the appnet overlay network. We can see that the service is exposed on port 8000 on all three nodes, so traffic to the application can be forwarded to any node. Suppose an external load balancer forwards a request to the only node that has no service instance; IPVS on that third node handles the request and, through the ingress network and the load balancing method described above, redirects it to one of the containers of the service that is actually running in the cluster.
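A minimal way to see the routing mesh in action (a sketch only: the service name mesh-demo, the nginx image, and the ports are placeholders, and the node IP assumes the three-node cluster built later in this chapter). Publish a port for a service that has fewer replicas than nodes, then curl that port on a node that runs no replica; the request is still answered because ingress forwards it to a node that does:

# Two replicas on a three-node cluster; port 8000 is published on every node
docker service create --name mesh-demo --replicas 2 --publish published=8000,target=80 nginx

# Any node answers, including the one without a local replica
curl http://192.168.101.13:8000

# Clean up
docker service rm mesh-demo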

Swarm mode operation mechanism


The image above is from the official documentation and clearly shows how managers and workers cooperate in swarm mode, so I will not go into more detail here.

Hands-on project

Objectives

  • Create a custom overlay network credit-facility-net
  • Set up Swarm Cluster
    • manager-node - 192.168.101.11
    • worker01-node - 192.168.101.12
    • worker02-node - 192.168.101.13
  • Set up the quota service cluster with three application instances
    • credit-facility-net:8080
  • Set up Mysql database
    • [Mysql Service]db:3306
  • Set up Nginx services and configure load balancing rules
    • [Nginx Service]nginx:80
  • Create Volume credit-facility-volume to persist Mysql container data
  • Use docker swarm's built-in load balancing and service discovery so that containers on the docker network communicate with each other by service/container name
  • Access swagger through browser for business operations

Because I don't have enough machine resources, I only created three virtual machines here; the manager node can also deploy services.

Set up virtual machine nodes


Vagrant is used here to manage the virtual machines. If you don't know what Vagrant is, check Chapter 1. The Vagrant commands are covered in the Vagrant Detailed Instructions Document.

1. On our host machine (your own computer), create a folder swarm-centos7 and create a Vagrantfile in that directory

Vagrantfile

boxes = [
    {
        :name => "manager-node",
        :eth1 => "192.168.101.11",
        :mem => "1024",
        :cpu => "1"
    },
    {
        :name => "worker01-node",
        :eth1 => "192.168.101.12",
        :mem => "1024",
        :cpu => "1"
    },
    {
        :name => "worker02-node",
        :eth1 => "192.168.101.13",
        :mem => "1024",
        :cpu => "1"
    }
]

Vagrant.configure(2) do |config|
  config.vm.box = "centos7"
  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
        v.customize ["modifyvm", :id, "--name", opts[:name]]
      end
      config.vm.network :public_network, ip: opts[:eth1]
    end
  end
end

This defines three virtual machines, assigned the static IPs 192.168.101.11, 192.168.101.12, and 192.168.101.13 respectively, and creates them in a loop. Each VM is given 1 CPU and 1 GB of memory, the minimum configuration docker swarm needs here.

2. Start the three virtual machines; remember to select the network interface you want to use during startup

evans-MacBook-Pro:swarm-centos7 evan$ vagrant up

When the command finishes, the three virtual machines will have been initialized successfully

Initialize virtual machine passwords

Since we will need an ssh client tool later, we need to set a password here

Change the root password on the manager node so we can upload files later; the password used here is evan123

[root@manager-node local]# passwd
Changing password for user root.
New password:
BAD PASSWORD: The password fails the dictionary check - it is too simplistic/systematic
Retype new password:
passwd: all authentication tokens updated successfully.

Only the manager node is shown here; the same needs to be done on the other two worker nodes

Install Docker Environment

Docker needs to be installed on each node. Log in to the three virtual machines separately with an SSH client tool, then follow these steps to install docker

1. Uninstall any previous docker installation, if present

sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

2. Install the dependencies required on the server

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3. Configure a registry mirror (accelerator). Downloading docker images from abroad is slow in mainland China, so here I use my own Aliyun accelerator. If you don't know how to configure one, check Chapter 1

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://6xh8u88j.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

4. Set up the Docker repository

sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

5. Install Docker

sudo yum install -y docker-ce docker-ce-cli containerd.io

6. After installation, go into the manager node and the two worker nodes and start the docker service

[root@manager-node ~]# systemctl start docker

7. Verify docker: run docker info and confirm that docker has been installed successfully

Set up swarm cluster

Initialize manager node

First enter the manager node 192.168.101.11 and initialize the swarm.

[root@manager-node credit-facility]# docker swarm init --advertise-addr=192.168.101.11
Swarm initialized: current node (tdtu8tl63zqim8jrbzf8l5bcn) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-61e18f81408f4ja2yrcn9l11y5x21okcx58d3f6gcptyydb0iz-9hquv710qpx8s4h88oy2fycei 192.168.101.11:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

A join token is generated; the worker nodes will use it to join the swarm cluster
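If you lose this output later, there is no need to re-initialize the swarm; the join command can be printed again at any time on the manager node:

# Print the command a worker should run to join this swarm
docker swarm join-token worker

# Print the command for joining as an additional manager
docker swarm join-token manager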

worker node joins swarm cluster

Here we need to go into the two worker nodes, 192.168.101.12 and 192.168.101.13, and run the following on each

docker swarm join --token SWMTKN-1-61e18f81408f4ja2yrcn9l11y5x21okcx58d3f6gcptyydb0iz-9hquv710qpx8s4h88oy2fycei 192.168.101.11:2377

Once this command succeeds, the current node has joined the swarm cluster

View Current Node

Information about all the current nodes can only be viewed on the swarm manager node

[root@manager-node credit-facility]# docker node ls
ID                            HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
tdtu8tl63zqim8jrbzf8l5bcn *   manager-node    Ready    Active         Leader           19.03.7
n8m358c7ta18206gzey7xsvw8     worker01-node   Ready    Active                          19.03.7
pgzph6ye6xl1p9fz0hif191kn     worker02-node   Ready    Active                          19.03.7

Here you can see that all three of our nodes have successfully joined the swarm cluster and that manager-node is the swarm cluster leader
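Although not needed for this tutorial, a few node-management commands are worth knowing once the cluster is up (all run on the manager node):

# Stop scheduling new tasks onto a worker, e.g. for maintenance
docker node update --availability drain worker01-node

# Make the worker schedulable again
docker node update --availability active worker01-node

# Promote a worker to manager, or demote it back
docker node promote worker01-node
docker node demote worker01-node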

Build Project

Create working directory

As in the previous chapter, create a credit-facility directory under /usr/local on each of the three cluster nodes

[root@worker01-node local]# mkdir credit-facility

Package and upload the jar

  • To make it easier to modify the database configuration dynamically later, the quota service's database configuration is changed to use variables, as follows:
# for docker-stack demo
spring.datasource.url = jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}?useUnicode=true&characterEncoding=utf8
#spring.datasource.url = jdbc:mysql://192.168.101.23:3301/db_credit_facility?useUnicode=true&characterEncoding=utf8
#Configure database user name
#spring.datasource.username = root
spring.datasource.username = ${DB_USER}
#Configure database password
#spring.datasource.password = evan123
spring.datasource.password = ${DB_PASSWORD}

The configuration file has already been modified in the project, so you can use it directly

  • Next, build the project with maven by running mvn clean package; the packaged jar ends up under the start module's target directory
  • Upload project jar package to manager, worker01, worker02 nodes respectively
evans-MacBook-Pro:target evan$ sftp root@192.168.101.11
root@192.168.101.11's password:
Connected to root@192.168.101.11.
sftp> put start-1.0.0-SNAPSHOT.jar /usr/local/credit-facility
Uploading start-1.0.0-SNAPSHOT.jar to /usr/local/credit-facility/start-1.0.0-SNAPSHOT.jar
start-1.0.0-SNAPSHOT.jar    100%   43MB  76.5MB/s   00:00
sftp>

evans-MacBook-Pro:target evan$ sftp root@192.168.101.12
root@192.168.101.12's password:
Connected to root@192.168.101.12.
sftp> put start-1.0.0-SNAPSHOT.jar /usr/local/credit-facility
Uploading start-1.0.0-SNAPSHOT.jar to /usr/local/credit-facility/start-1.0.0-SNAPSHOT.jar
start-1.0.0-SNAPSHOT.jar    100%   43MB  76.5MB/s   00:00
sftp>

evans-MacBook-Pro:target evan$ sftp root@192.168.101.13
root@192.168.101.13's password:
Connected to root@192.168.101.13.
sftp> put start-1.0.0-SNAPSHOT.jar /usr/local/credit-facility
Uploading start-1.0.0-SNAPSHOT.jar to /usr/local/credit-facility/start-1.0.0-SNAPSHOT.jar
start-1.0.0-SNAPSHOT.jar    100%   43MB  76.5MB/s   00:00
sftp>

Create Quota Service Image

  • Create a Dockerfile under the /usr/local/credit-facility directory on each of the three nodes
[root@manager-node credit-facility]# cat Dockerfile
FROM openjdk:8-jre-alpine
MAINTAINER evan
LABEL name="credit-facility" version="1.0" author="evan"
COPY start-1.0.0-SNAPSHOT.jar credit-facility-service.jar
CMD ["java","-jar","credit-facility-service.jar"]

This file has already been placed in the quota service project; just copy it

  • In the /usr/local/credit-facility directory on each of the three nodes, run the following command to build the quota service image. Only the manager node is shown here; do the same on the other nodes
[root@manager-node credit-facility]# docker build -t credit-facility-image .
Sending build context to Docker daemon  44.92MB
Step 1/5 : FROM openjdk:8-jre-alpine
 ---> f7a292bbb70c
Step 2/5 : MAINTAINER evan
 ---> Running in 50b0ae0125ef
Removing intermediate container 50b0ae0125ef
 ---> b4231d681d22
Step 3/5 : LABEL name="credit-facility" version="1.0" author="evan"
 ---> Running in 4a6bb0ae9f12
Removing intermediate container 4a6bb0ae9f12
 ---> ea441d121fc4
Step 4/5 : COPY start-1.0.0-SNAPSHOT.jar credit-facility-service.jar
 ---> 0bed9d9397f6
Step 5/5 : CMD ["java","-jar","credit-facility-service.jar"]
 ---> Running in 6bb0c14f1a85
Removing intermediate container 6bb0c14f1a85
 ---> de2606eea641
Successfully built de2606eea641
Successfully tagged credit-facility-image:latest

Check the list of images that now exists on each node:

[root@worker01-node credit-facility]# docker images
REPOSITORY              TAG            IMAGE ID       CREATED         SIZE
credit-facility-image   latest         8dcef5954aaa   3 hours ago     130MB
openjdk                 8-jre-alpine   f7a292bbb70c   10 months ago   84.9MB

As the query above shows, our quota service image has been built successfully
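Optionally, you can smoke-test the freshly built image on a single node before deploying it to the swarm. This is only a sketch: the DB_* values below are placeholders, and since the Mysql service is not deployed until later you should expect database connection errors at this point; the purpose is merely to confirm that the jar starts inside the container:

# Run the image in the foreground, watch the Spring Boot startup log, then Ctrl+C to stop
docker run --rm -p 8080:8080 \
  -e DB_HOST=192.168.101.11 -e DB_PORT=3306 \
  -e DB_USER=root -e DB_PASSWORD=evan123 -e DB_NAME=db_credit_facility \
  credit-facility-image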

Create a Nginx configuration

Because we will use an Nginx service later, we need to create its configuration ahead of time and then mount it over the default configuration inside the Nginx container

Under the /usr/local/credit-facility folder, create an nginx directory and an nginx.conf configuration file inside it

[root@manager-node nginx]# cat nginx.conf
user nginx;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        location / {
            proxy_pass http://balance;
        }
    }

    upstream balance {
        server credit-facility-service:8080;
    }

    include /etc/nginx/conf.d/*.conf;
}

I rely on docker swarm's built-in DNS here: the upstream host name credit-facility-service is the service name, and swarm's DNS automatically routes requests to the corresponding service tasks

Create the Compose configuration file

  • Under the /usr/local/credit-facility folder of the manager node, create a docker-stack.yml to create and manage the services (note that this is only needed on the manager node; the manager will distribute the docker services to the other nodes)
[root@manager-node credit-facility]# cat docker-stack.yml
version: '3'
services:
  db:
    restart: always
    image: mysql
    build:
      context: /usr/local/credit-facility
    ports:
      - 3306:3306/tcp
    volumes:
      - "credit-facility-volume:/var/lib/mysql:rw"
    environment:
      - MYSQL_DATABASE=db_credit_facility
      - MYSQL_ROOT_PASSWORD=evan123
    networks:
      - demo-overlay
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
  credit-facility-service:
    restart: always
    image: credit-facility-image
    build:
      context: /usr/local/credit-facility
    ports:
      - 8080:8080
    environment:
      - DB_HOST=db
      - DB_PORT=3306
      - DB_USER=root
      - DB_PASSWORD=evan123
      - DB_NAME=db_credit_facility
    networks:
      - demo-overlay
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
  nginx:
    restart: always
    depends_on:
      - db
      - credit-facility-service
    image: nginx
    ports:
      - "80:80"
    volumes:
      - /usr/local/credit-facility/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - demo-overlay
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager

networks:
  demo-overlay:
    driver: overlay

volumes:
  credit-facility-volume: {}

Publish services to Docker Swarm

  • Use docker stack to publish the services; it publishes them to every node of the cluster (docker compose is for single-machine deployment, while multi-machine swarm deployment needs docker stack)
[root@manager-node credit-facility]# docker stack deploy -c docker-stack.yml web
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network web_credit-facility-net
Creating service web_nginx
Creating service web_db
Creating service web_credit-facility-service
...
  • View service status
[root@manager-node nginx]# docker service ls
ID             NAME                          MODE         REPLICAS   IMAGE                          PORTS
3qcjnj7n5dkk   web_credit-facility-service   replicated   3/3        credit-facility-image:latest   *:8080->8080/tcp
t5omqvum57ag   web_db                        global       1/1        mysql:latest                   *:3306->3306/tcp
w89fkne6fzcg   web_nginx                     global       1/1        nginx:latest                   *:80->80/tcp

As you can see from the results above, every service we published has started successfully, and the three instances of the quota service have been published across the three nodes

  • See which node each instance of the quota service has landed on
[root@manager-node nginx]# docker service ps web_credit-facility-service
ID             NAME                            IMAGE                          NODE            DESIRED STATE   CURRENT STATE            ERROR   PORTS
pc32kfmfxke0   web_credit-facility-service.1   credit-facility-image:latest   worker01-node   Running         Running 23 minutes ago
8v9efe61p5wb   web_credit-facility-service.2   credit-facility-image:latest   manager-node    Running         Running 23 minutes ago
sg1wh95lxyca   web_credit-facility-service.3   credit-facility-image:latest   worker02-node   Running         Running 23 minutes ago

The result above clearly shows which node each instance has been published to
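You can also confirm that the service name used in the Nginx upstream really resolves through swarm's DNS. This is a hedged check: it assumes the official nginx image (which ships getent) and that the container name filter below matches how this stack names its containers on your machine:

# On the manager node, find the nginx task container and resolve the upstream name inside it
docker exec $(docker ps -q -f name=web_nginx) getent hosts credit-facility-service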

Initialize database configuration

As in the previous chapter, the quota service depends on the database, so you need to initialize the tables it uses. Use a Mysql client to connect to the database and execute the table creation script under resources/db in the project. I use Navicat to connect here; the database connection is 192.168.101.11:3306

Verify directory contents

At this point all the configuration is done. Let's check the existing files on each node to make sure nothing is missing or misconfigured.

Under the credit-facility folder of the manager node, there should be the following files

[root@manager-node credit-facility]# ls
docker-compose.yaml  Dockerfile  docker-stack.yml  nginx  start-1.0.0-SNAPSHOT.jar

Under the credit-facility folder of the two worker nodes, there should be the following files

worker-01 node

[root@worker01-node credit-facility]# ls
Dockerfile  start-1.0.0-SNAPSHOT.jar

worker-02 node

[root@worker02-node credit-facility]# ls
Dockerfile  start-1.0.0-SNAPSHOT.jar

Validate the services

  • Verify the Nginx service - if the Nginx proxy rules are correct, you should be able to open http://192.168.101.11/swagger-ui.html in a browser and reach our quota service

    As you can see above, Nginx has successfully proxied our quota service cluster
  • Verify the quota service on each node - if docker swarm is working properly, we should be able to access the service through <node ip>:8080 on each of the three nodes



    As shown above, I can access our quota service through port 8080 on each of the three node IPs, which proves that our swarm cluster has been set up successfully (a scripted curl check is sketched below)
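The same check can be scripted from the host machine (a small sketch; it assumes your host can reach the node IPs and uses the swagger path shown above):

# Every node should answer on port 8080 thanks to the ingress routing mesh
for ip in 192.168.101.11 192.168.101.12 192.168.101.13; do
  curl -s -o /dev/null -w "$ip -> HTTP %{http_code}\n" http://$ip:8080/swagger-ui.html
done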

Validate Quota Service Function

Since our quota service writes to the database, here we will call one of its interfaces and see whether the record is persisted successfully.

The request parameters are as follows:

{ "registrationLimitCO": { "applicationId": "1111", "userId": 1111, "quotaLimit": 10000, "productCode": "tb", "expirationTime": "2030-01-01", "accountType": 1 } }

Execute Quota Registration Interface

Execution results:

As you can see from the execution result above, the interface has processed our request successfully and the record was written to the database. Interested readers can check the corresponding record in the database

Appendix

References

Official Document - Swarm Networking

Common commands for Docker Swarm

Docker Stack

  • List the stacks in the swarm
[root@manager-node credit-facility]# docker stack ls
NAME   SERVICES   ORCHESTRATOR
web    3          Swarm
  • Create a service from docker-stack.yml
docker stack deploy -c docker-stack.yml web
  • View a service
[root@manager-node credit-facility]# docker service inspect web_db
...
"Endpoint": {
    "Spec": {
        "Mode": "vip",
        "Ports": [
            {
                "Protocol": "tcp",
                "TargetPort": 3306,
                "PublishedPort": 3306,
                "PublishMode": "ingress"
            }
        ]
    },
    "Ports": [
        {
            "Protocol": "tcp",
            "TargetPort": 3306,
            "PublishedPort": 3306,
            "PublishMode": "ingress"
        }
    ],
    "VirtualIPs": [
        {
            "NetworkID": "5mmql4cfhoac6q3y67wm4x2uh",
            "Addr": "10.0.0.122/24"
        },
        {
            "NetworkID": "xx3b6lki8n1nvkphffretc050",
            "Addr": "10.0.6.12/24"
        }
    ]
}
...

Docker Service

  • Create a service for nginx
docker service create --name my-nginx nginx
  • List the services in the current swarm
docker service ls
  • View service startup log
docker service logs my-nginx
  • View service details
docker service inspect my-nginx
  • See which node my-nginx is running on
docker service ps my-nginx
  • Scale the service horizontally
docker service scale my-nginx=3
docker service ls
docker service ps my-nginx
  • Delete service
docker service rm my-nginx