Docker Swarm for Container Technology


In the previous post I covered the basic use and underlying principles of docker machine; refer back to https://www.cnblogs.com/qiuhom-1874/p/13160915.html. Today let's talk about Docker's cluster management tool, docker swarm. Docker swarm is the official cluster management tool for Docker; it lets you create and manage a Docker cluster across multiple hosts. Its main function is to integrate the Docker environments of multiple node hosts into one large Docker resource pool, on top of which swarm creates and manages containers. So far we have only created and managed containers on a single host, but in a production environment the containers on a single physical machine are rarely enough to meet the needs of the business, so docker swarm provides a cluster solution for creating and managing containers across multiple nodes. Let's walk through the process of building a docker swarm cluster.

Docker swarm is installed along with Docker itself, so we can check its status with docker info:

[root@node1 ~]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-693.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.686GiB
 Name: docker-node01
 ID: 4HXP:YJ5W:4SM5:NAPM:NXPZ:QFIU:ARVJ:BYDG:KVWU:5AAJ:77GC:X7GQ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=generic
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
[root@node1 ~]#

Tip: From the information above, you can see that swarm is inactive because we have not initialized the cluster yet, so the value of the corresponding swarm option is inactive.
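If you only want the swarm state rather than the full docker info output, a one-line check is possible with docker info's Go template support (a minimal sketch; the .Swarm.LocalNodeState field name is taken from the docker info format as I recall it):

# prints "inactive" before the cluster is initialized, "active" afterwards
docker info --format '{{.Swarm.LocalNodeState}}'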

Initialize Cluster

[root@docker-node01 ~]# docker swarm init --advertise-addr 192.168.0.41
Swarm initialized: current node (ynz304mbltxx10v3i15ldkmj1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-2m9x12n102ca4qlyjpseobzik 192.168.0.41:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@docker-node01 ~]#

Tip: The output above shows that cluster initialization succeeded and that the current node is now a manager. To have another node join the cluster as a worker, run docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-2m9x12n102ca4qlyjpseobzik 192.168.0.41:2377 on that node. If you want a node to join as a manager instead, first run docker swarm join-token manager on the current manager to get the manager join command.

[root@docker-node01 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-dqjeh8hp6cp99bksjc03b8yu3 192.168.0.41:2377

[root@docker-node01 ~]#

Tip: Running docker swarm join-token manager prints the command for adding a manager: execute docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-dqjeh8hp6cp99bksjc03b8yu3 192.168.0.41:2377 on the node that should join as a manager.
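As an aside, the worker join command can be printed again at any time on a manager; a small sketch:

# reprint the full worker join command
docker swarm join-token worker
# or print only the token itself
docker swarm join-token -q worker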

The docker swarm cluster is initialized, so let's join the other nodes to the cluster

Join docker-node02 to the cluster as a worker node

[root@node2 ~]# docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-2m9x12n102ca4qlyjpseobzik 192.168.0.41:2377
This node joined a swarm as a worker.
[root@node2 ~]#

Tip: The node joined the cluster successfully with no errors; we can use docker info to view the details of the current Docker environment.

Tip: The output above shows that swarm is now active on docker-node02 and lists the manager's address. Besides checking docker info on docker-node02 itself, we can also run docker node ls on the manager to confirm that the node has joined and to view the cluster node information.
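Instead of reading the whole docker info dump on docker-node02, a format-based check is also possible (a sketch; the template field names are assumptions based on the docker info Go template):

# run on docker-node02: swarm state, this node's address, and whether it is a manager
docker info --format 'swarm: {{.Swarm.LocalNodeState}}, node addr: {{.Swarm.NodeAddr}}, manager: {{.Swarm.ControlAvailable}}'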

View cluster node information

Tip: Running docker node ls on the manager lists the nodes that have successfully joined the current cluster.

Join docker-node03 to the cluster as a manager node
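A minimal sketch of this step, reusing the manager token printed by docker swarm join-token manager above (the hostnames are the ones used in this article):

# on docker-node03: join the swarm with the manager token
docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-dqjeh8hp6cp99bksjc03b8yu3 192.168.0.41:2377

# back on docker-node01: the new node should show a MANAGER STATUS of Reachable
docker node ls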

Tip: You can see that docker-node03 is now a manager of the cluster, so the docker node ls command can also be executed on docker-node03. With that, the docker swarm cluster is set up; next let's go over common management tasks for a docker swarm cluster.

- Node management commands

docker node ls: list all nodes in the current cluster

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
aeo8j7zit9qkoeeft3j0q1h0z     docker-node03       Ready               Active              Reachable           19.03.11
[root@docker-node01 ~]#

Tip: This command can only be executed on manager nodes.

docker node inspect: View the details of the specified node;

[root@docker-node01 ~]# docker node inspect docker-node01 [ { "ID": "ynz304mbltxx10v3i15ldkmj1", "Version": { "Index": 9 }, "CreatedAt": "2020-06-20T05:57:17.57684293Z", "UpdatedAt": "2020-06-20T05:57:18.18575648Z", "Spec": { "Labels": {}, "Role": "manager", "Availability": "active" }, "Description": { "Hostname": "docker-node01", "Platform": { "Architecture": "x86_64", "OS": "linux" }, "Resources": { "NanoCPUs": 4000000000, "MemoryBytes": 3958075392 }, "Engine": { "EngineVersion": "19.03.11", "Labels": { "provider": "generic" }, "Plugins": [ { "Type": "Log", "Name": "awslogs" }, { "Type": "Log", "Name": "fluentd" }, { "Type": "Log", "Name": "gcplogs" }, { "Type": "Log", "Name": "gelf" }, { "Type": "Log", "Name": "journald" }, { "Type": "Log", "Name": "json-file" }, { "Type": "Log", "Name": "local" }, { "Type": "Log", "Name": "logentries" }, { "Type": "Log", "Name": "splunk" }, { "Type": "Log", "Name": "syslog" }, { "Type": "Network", "Name": "bridge" }, { "Type": "Network", "Name": "host" }, { "Type": "Network", "Name": "ipvlan" }, { "Type": "Network", "Name": "macvlan" }, { "Type": "Network", "Name": "null" }, { "Type": "Network", "Name": "overlay" }, { "Type": "Volume", "Name": "local" } ] }, "TLSInfo": { "TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBaTCCARCgAwIBAgIUeBd/eSZ7WaiyLby9o1yWpjps3gwwCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjAwNjIwMDU1MjAwWhcNNDAwNjE1MDU1\nMjAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABMsYxnGoPbM4gqb23E1TvOeQcLcY56XysLuF8tYKm56GuKpeD/SqXrUCYqKZ\nHV+WSqcM0fD1g+mgZwlUwFzNxhajQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBTV64kbvS83eRHyI6hdJeEIv3GmrTAKBggqhkjO\nPQQDAgNHADBEAiBBB4hLn0ijybJWH5j5rtMdAoj8l/6M3PXERnRSlhbcawIgLoby\newMHCnm8IIrUGe7s4CZ07iHG477punuPMKDgqJ0=\n-----END CERTIFICATE-----\n", "CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh", "CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEyxjGcag9sziCpvbcTVO855BwtxjnpfKwu4Xy1gqbnoa4ql4P9KpetQJiopkdX5ZKpwzR8PWD6aBnCVTAXM3GFg==" } }, "Status": { "State": "ready", "Addr": "192.168.0.41" }, "ManagerStatus": { "Leader": true, "Reachability": "reachable", "Addr": "192.168.0.41:2377" } } ] [root@docker-node01 ~]#

docker node ps: Lists the running containers on the specified node

[root@docker-node01 ~]# docker node ps
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE       ERROR               PORTS
[root@docker-node01 ~]# docker node ps docker-node01
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE       ERROR               PORTS
[root@docker-node01 ~]#

Tip: This works like the docker ps command; since no containers are running on the node yet, nothing is listed. By default, when no node name is given, docker node ps shows the containers running on the current node.
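docker node ps also accepts several node names at once, so a quick way to list the tasks on every node from a manager is something like this sketch (assuming a POSIX shell for the command substitution):

# docker node ls -q prints just the node IDs; feed them all to docker node ps
docker node ps $(docker node ls -q)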

docker node rm: Delete the specified node

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
aeo8j7zit9qkoeeft3j0q1h0z     docker-node03       Ready               Active              Reachable           19.03.11
[root@docker-node01 ~]# docker node rm docker-node03
Error response from daemon: rpc error: code = FailedPrecondition desc = node aeo8j7zit9qkoeeft3j0q1h0z is a cluster manager and is a member of the raft cluster. It must be demoted to worker before removal
[root@docker-node01 ~]# docker node rm docker-node02
Error response from daemon: rpc error: code = FailedPrecondition desc = node tzkm0ymzjdmc1r8d54snievf1 is not down and can't be removed
[root@docker-node01 ~]#

Tip: Two conditions must be satisfied before a node can be deleted: the node must not be a manager (demote it first), and it must be in the Down state. A sketch of the usual order is shown below.
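A sketch of that order for a manager node such as docker-node03:

# on a manager: demote the node to a worker first
docker node demote docker-node03
# on docker-node03 itself: leave the swarm, which puts the node into the Down state
#   docker swarm leave
# back on a manager: the node can now be removed
docker node rm docker-node03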

docker swarm leave: leave the current cluster

[root@docker-node03 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                    CREATED             STATUS              PORTS               NAMES
e7958ffa16cd        nginx               "/docker-entrypoint...."   28 seconds ago      Up 26 seconds       80/tcp              n1
[root@docker-node03 ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. Removing this node leaves 1 managers out of 2. Without a Raft quorum your swarm will be inaccessible. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message.
[root@docker-node03 ~]# docker swarm leave -f
Node left the swarm.
[root@docker-node03 ~]#

Tip: By default, a manager node is not allowed to leave the cluster. If it is forced to leave with the -f option, the remaining managers may lose quorum and be unable to manage the cluster properly.

[root@docker-node01 ~]# docker node ls
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
[root@docker-node01 ~]#

Tip: We can no longer list the cluster nodes with docker node ls on docker-node01; the solution is to reinitialize the cluster.

[root@docker-node01 ~]# docker node ls
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
[root@docker-node01 ~]# docker swarm init --advertise-addr 192.168.0.41
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
[root@docker-node01 ~]# docker swarm init --force-new-cluster
Swarm initialized: current node (ynz304mbltxx10v3i15ldkmj1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6difxlq3wc8emlwxzuw95gp8rmvbz2oq62kux3as0e4rbyqhk3-2m9x12n102ca4qlyjpseobzik 192.168.0.41:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Unknown             Active                                  19.03.11
aeo8j7zit9qkoeeft3j0q1h0z     docker-node03       Down                Active                                  19.03.11
rm3j7cjvmoa35yy8ckuzoay46     docker-node03       Unknown             Active                                  19.03.11
[root@docker-node01 ~]#

Tip: In this situation the cluster cannot be reinitialized with docker swarm init --advertise-addr 192.168.0.41; it must be reinitialized with docker swarm init --force-new-cluster, which forces a new cluster to be created from the current state. After that, we can use docker node rm to remove the nodes that are in the Down state from the cluster.

Delete down state nodes

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
aeo8j7zit9qkoeeft3j0q1h0z     docker-node03       Down                Active                                  19.03.11
rm3j7cjvmoa35yy8ckuzoay46     docker-node03       Down                Active                                  19.03.11
[root@docker-node01 ~]# docker node rm aeo8j7zit9qkoeeft3j0q1h0z rm3j7cjvmoa35yy8ckuzoay46
aeo8j7zit9qkoeeft3j0q1h0z
rm3j7cjvmoa35yy8ckuzoay46
[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
[root@docker-node01 ~]#

docker node promote: promote the specified node to a manager node

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
[root@docker-node01 ~]# docker node promote docker-node02
Node docker-node02 promoted to a manager in the swarm.
[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active              Reachable           19.03.11
[root@docker-node01 ~]#

docker node demote: demote the specified node to a worker node

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active              Reachable           19.03.11
[root@docker-node01 ~]# docker node demote docker-node02
Manager docker-node02 demoted in the swarm.
[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
[root@docker-node01 ~]#

docker node update: update specified node

[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Active              Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
[root@docker-node01 ~]# docker node update docker-node01 --availability drain
docker-node01
[root@docker-node01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ynz304mbltxx10v3i15ldkmj1 *   docker-node01       Ready               Drain               Leader              19.03.11
tzkm0ymzjdmc1r8d54snievf1     docker-node02       Ready               Active                                  19.03.11
[root@docker-node01 ~]#

Tip: The command above changes the availability of docker-node01 to Drain, so that after the change the scheduler will no longer place new containers on docker-node01.
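To let the scheduler use the node again later, set its availability back to active (there is also a pause value, which keeps existing tasks but accepts no new ones):

# allow new tasks to be scheduled on docker-node01 again
docker node update --availability active docker-node01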

Add a GUI for the docker swarm cluster

[root@docker-node01 docker]# docker run --name v1 -d -p 8888:8080 -e HOST=192.168.0.41 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock docker-registry.io/test/visualizer
Unable to find image 'docker-registry.io/test/visualizer:latest' locally
latest: Pulling from test/visualizer
cd784148e348: Pull complete
f6268ae5d1d7: Pull complete
97eb9028b14b: Pull complete
9975a7a2a3d1: Pull complete
ba903e5e6801: Pull complete
7f034edb1086: Pull complete
cd5dbf77b483: Pull complete
5e7311667ddb: Pull complete
687c1072bfcb: Pull complete
aa18e5d3472c: Pull complete
a3da1957bd6b: Pull complete
e42dbf1c67c4: Pull complete
5a18b01011d2: Pull complete
Digest: sha256:54d65cbcbff52ee7d789cd285fbe68f07a46e3419c8fcded437af4c616915c85
Status: Downloaded newer image for docker-registry.io/test/visualizer:latest
3c15b186ff51848130393944e09a427bd40d2504c54614f93e28477a4961f8b6
[root@docker-node01 docker]# docker ps
CONTAINER ID        IMAGE                                COMMAND             CREATED             STATUS                            PORTS                    NAMES
3c15b186ff51        docker-registry.io/test/visualizer   "npm start"         6 seconds ago       Up 5 seconds (health: starting)   0.0.0.0:8888->8080/tcp   v1
[root@docker-node01 docker]#

Tip: The image used above comes from a private registry; because downloading it over the Internet is slow, I pulled it in advance and pushed it to a private registry. For building and using a private registry, refer to https://www.cnblogs.com/qiuhom-1874/p/13061984.html or https://www.cnblogs.com/qiuhom-1874/p/13058338.html. After running the visualizer container on the manager node, we can open port 8888 on the manager's address and see the current state of the cluster, as shown below.

Tip: From the visualizer page you can see that the current cluster has one manager node and two worker nodes, and that no containers are running in the cluster yet.
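The same tool is also published on Docker Hub as dockersamples/visualizer, so if your nodes can reach the Internet, a sketch of running it as a swarm service pinned to a manager (instead of a plain container, as above) would look like this:

docker service create --name visualizer \
  --publish 8888:8080 \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer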

Run a service in docker swarm

[root@docker-node01 ~]# docker service create --name myweb docker-registry.io/test/nginx:latest
i0j6wvvtfe1360ibj04jxulmd
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@docker-node01 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
i0j6wvvtfe13        myweb               replicated          1/1                 docker-registry.io/test/nginx:latest
[root@docker-node01 ~]# docker service ps myweb
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
99y8towew77e        myweb.1             docker-registry.io/test/nginx:latest   docker-node03       Running             Running 1 minutes ago
[root@docker-node01 ~]#

Tip: docker service create creates a service in the current swarm cluster. The command above creates a service named myweb on the cluster using the docker-registry.io/test/nginx:latest image; by default only one replica is started.

Tip: You can see that one myweb container is running in the cluster, on the docker-node03 host.
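Two handy follow-up commands for a newly created service (standard docker CLI, shown here as a sketch):

# human-readable summary of the service definition
docker service inspect --pretty myweb
# aggregated logs from all replicas of the service
docker service logs myweb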

Create a multi-replica service on the swarm cluster

[root@docker-node01 ~]# docker service create --replicas 3 --name web docker-registry.io/test/nginx:latest
mbiap412jyugfpi4a38mb5i1k
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@docker-node01 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
i0j6wvvtfe13        myweb               replicated          1/1                 docker-registry.io/test/nginx:latest
mbiap412jyug        web                 replicated          3/3                 docker-registry.io/test/nginx:latest
[root@docker-node01 ~]# docker service ps web
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
1rt0e7u4senz        web.1               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 28 seconds ago
31ll0zu7udld        web.2               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 28 seconds ago
l9jtbswl2x22        web.3               docker-registry.io/test/nginx:latest   docker-node03       Running             Running 32 seconds ago
[root@docker-node01 ~]#

Tip: The --replicas option specifies the number of replicas to run. Swarm creates the specified number of replicas on the cluster and always tries to keep that many containers running, even if some nodes in the cluster go down.

Testing: shut down docker-node03 and see whether the services we are running migrate to node 2.

Before docker-node03 shuts down

After docker-node03 shuts down

Tip: From the screenshots above you can see that when node 3 goes down, all containers that were running on it are migrated to node 2; this is what the --replicas option gives us. To summarize, a service created in replicated mode will have its tasks migrated to other nodes if a node fails. One reminder: as long as the number of replicas on the cluster matches the number we specified, swarm will not migrate the tasks back, even after the failed node recovers.

[root@docker-node01 ~]# docker service ps web
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
1rt0e7u4senz        web.1               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 15 minutes ago
31ll0zu7udld        web.2               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 15 minutes ago
t3gjvsgtpuql        web.3               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 6 minutes ago
l9jtbswl2x22         \_ web.3           docker-registry.io/test/nginx:latest   docker-node03       Shutdown            Shutdown 23 seconds ago
[root@docker-node01 ~]#

Tip: Looking at the task list on the manager, we can see that "migrating" a service really means shutting down the replica on the failed node and creating a new replica on another node.
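If you do want the tasks spread out again after the failed node recovers, one option (my own suggestion, not something done in this article) is to force a rolling update, which reschedules the tasks and may place some of them back on the recovered node:

# re-run the scheduler for the service even though nothing changed
docker service update --force web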

Service scaling

[root@docker-node01 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
i0j6wvvtfe13        myweb               replicated          1/1                 docker-registry.io/test/nginx:latest
mbiap412jyug        web                 replicated          3/3                 docker-registry.io/test/nginx:latest
[root@docker-node01 ~]# docker service scale myweb=3 web=5
myweb scaled to 3
web scaled to 5
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged
[root@docker-node01 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
i0j6wvvtfe13        myweb               replicated          3/3                 docker-registry.io/test/nginx:latest
mbiap412jyug        web                 replicated          5/5                 docker-registry.io/test/nginx:latest
[root@docker-node01 ~]# docker service ps myweb web
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
j7w490h2lons        myweb.1             docker-registry.io/test/nginx:latest   docker-node02       Running             Running 12 minutes ago
1rt0e7u4senz        web.1               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 21 minutes ago
99y8towew77e        myweb.1             docker-registry.io/test/nginx:latest   docker-node03       Shutdown            Shutdown 5 minutes ago
en5rk0jf09wu        myweb.2             docker-registry.io/test/nginx:latest   docker-node03       Running             Running 31 seconds ago
31ll0zu7udld        web.2               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 21 minutes ago
h1hze7h819ca        myweb.3             docker-registry.io/test/nginx:latest   docker-node03       Running             Running 30 seconds ago
t3gjvsgtpuql        web.3               docker-registry.io/test/nginx:latest   docker-node02       Running             Running 12 minutes ago
l9jtbswl2x22         \_ web.3           docker-registry.io/test/nginx:latest   docker-node03       Shutdown            Shutdown 5 minutes ago
od3ti2ixpsgc        web.4               docker-registry.io/test/nginx:latest   docker-node03       Running             Running 31 seconds ago
n1vur8wbmkgz        web.5               docker-registry.io/test/nginx:latest   docker-node03       Running             Running 31 seconds ago
[root@docker-node01 ~]#

Tip: The docker service scale command sets the number of replicas of a service, letting you scale it up or down dynamically; see the equivalent sketch below.
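For a single service, docker service update --replicas achieves the same thing, so the web part of the step above could also be written as:

# equivalent to "docker service scale web=5"
docker service update --replicas 5 web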

- Service Exposure

[root@docker-node01 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
i0j6wvvtfe13        myweb               replicated          3/3                 docker-registry.io/test/nginx:latest
mbiap412jyug        web                 replicated          5/5                 docker-registry.io/test/nginx:latest
[root@docker-node01 ~]# docker service update --publish-add 80:80 myweb
myweb
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@docker-node01 ~]#

Tip: Service exposure in a docker swarm cluster works much like port publishing in plain Docker: traffic is steered through iptables rules and/or LVS (IPVS) rules.

Tip: We can see that port 80 is now listening on the manager node, and that an extra iptables rule has been added: traffic arriving at port 80 on the host is DNATed to 172.18.0.2:80. This is not limited to the manager node; the iptables rules on the worker nodes have changed in the same way, as shown below.

Tip: From the rules above, whenever we visit port 80 on any node's address, the traffic is DNATed to 172.18.0.2:80.

Tip: From the results shown above, it is easy to see that the myweb container running on docker-node02 has the internal address 10.0.0.7, so why does accessing 172.18.0.2 reach the service inside the container? The reason is that 10.0.0.7 belongs to the ingress network, whose scope is the containers in the swarm cluster and whose type is overlay. Simply put, the ingress network is a virtual overlay network, while the network that actually carries the traffic is the docker_gwbridge network, a bridge network with local scope. So when we access port 80 on a node's address, the iptables rules forward the traffic onto the docker_gwbridge network, and the kernel then forwards traffic from docker_gwbridge onto the ingress network, which finally reaches the container that actually provides the service. The same thing happens on the manager node: accessing port 80 on the local machine sends the traffic through iptables rules to docker_gwbridge, and docker_gwbridge passes it through the kernel to the ingress network. Because the ingress network is valid across the whole swarm, the manager and worker nodes effectively share one swarm-scoped network, so a request to port 80 on the manager is ultimately, via iptables rules and kernel forwarding, delivered to a container on a worker node and reaches the service.
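You can look at these two networks yourself with standard docker commands; a sketch (the subnets will of course differ from one environment to another):

# the swarm-scoped overlay networks, including ingress
docker network ls --filter driver=overlay
# the subnets of the ingress and docker_gwbridge networks
docker network inspect ingress --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
docker network inspect docker_gwbridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'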

Testing: visit port 80 on the manager node and see whether we can reach the pages served by nginx.

[root@docker-node02 ~]# docker ps
CONTAINER ID        IMAGE                                  COMMAND                    CREATED             STATUS              PORTS               NAMES
b829991d6966        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   About an hour ago   Up About an hour    80/tcp              myweb.1.ilhkslrlnreyo6xx5j2h9isjb
8c2965fbdc27        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              web.2.pthe8da2n45i06oee4n7h4krd
b019d663e48e        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              web.3.w26gqpoyysgplm7qwhjbgisiv
a7c1afd76f1f        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              web.1.ho0d7u3wensl0kah0ioz1lpk5
[root@docker-node02 ~]# docker exec -it myweb.1.ilhkslrlnreyo6xx5j2h9isjb bash
root@b829991d6966:/# cd /usr/share/nginx/html/
root@b829991d6966:/usr/share/nginx/html# ls
50x.html  index.html
root@b829991d6966:/usr/share/nginx/html# echo "this is docker-node02 index page" >index.html
root@b829991d6966:/usr/share/nginx/html# cat index.html
this is docker-node02 index page
root@b829991d6966:/usr/share/nginx/html#

Tip: Above we modified the home page of the nginx container running on docker-node02. Next, let's visit port 80 on the manager node and see whether we reach the containers on the worker nodes, and how requests are distributed: are they balanced round-robin across containers, or do we always hit the same container?

Tip: You can see that requests to port 80 on the manager node are distributed round-robin across the containers on the worker nodes. To rule out browser caching effects, we can test with the curl command, as follows:

[root@docker-node03 ~]# docker ps
CONTAINER ID        IMAGE                                  COMMAND                    CREATED             STATUS              PORTS               NAMES
f43fdb9ec7fc        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              myweb.3.pgdjutofb5thlk02aj7387oj0
4470785f3d00        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              myweb.2.uwxbe182qzq00qgfc7odcmx87
7493dcac95ba        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              web.4.rix50fhlmg6m9txw9urk66gvw
118880d300f4        docker-registry.io/test/nginx:latest   "/docker-entrypoint...."   2 hours ago         Up 2 hours          80/tcp              web.5.vo7c7vjgpf92b0ryelb7eque0
[root@docker-node03 ~]# docker exec -it myweb.2.uwxbe182qzq00qgfc7odcmx87 bash
root@4470785f3d00:/# cd /usr/share/nginx/html/
root@4470785f3d00:/usr/share/nginx/html# echo "this is myweb.2 index page" > index.html
root@4470785f3d00:/usr/share/nginx/html# cat index.html
this is myweb.2 index page
root@4470785f3d00:/usr/share/nginx/html# exit
exit
[root@docker-node03 ~]# docker exec -it myweb.3.pgdjutofb5thlk02aj7387oj0 bash
root@f43fdb9ec7fc:/# cd /usr/share/nginx/html/
root@f43fdb9ec7fc:/usr/share/nginx/html# echo "this is myweb.3 index page" >index.html
root@f43fdb9ec7fc:/usr/share/nginx/html# cat index.html
this is myweb.3 index page
root@f43fdb9ec7fc:/usr/share/nginx/html# exit
exit
[root@docker-node03 ~]#

Tip: To make the results easy to tell apart, we changed the home pages of both myweb.2 and myweb.3.

[root@docker-node01 ~]# for i in {1..10}; do curl 192.168.0.41; done
this is myweb.3 index page
this is docker-node02 index page
this is myweb.2 index page
this is myweb.3 index page
this is docker-node02 index page
this is myweb.2 index page
this is myweb.3 index page
this is docker-node02 index page
this is myweb.2 index page
this is myweb.3 index page
[root@docker-node01 ~]#

Tip: The test above shows that when we expose a service with --publish-add, swarm sets up load balancing for it: requests arriving at any node are balanced across the service's replicas.
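Ports can also be published when a service is first created rather than added afterwards with --publish-add; a sketch using the same image as above:

# publish container port 80 on port 8080 of every node in the swarm
docker service create --name web2 --replicas 2 --publish 8080:80 docker-registry.io/test/nginx:latest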

