In the previous post we covered Docker images: what an image is, how to build a new image on top of an existing one, and how to push images to a registry. For a review, see https://www.cnblogs.com/qiuhom-1874/p/12941508.html. Today, let's talk about Docker's network;
When using a VM, we know that a virtual machine's NIC can be attached to one of three kinds of virtual networks: a bridged network, a NAT network, or a host-only network; behind each of these interface types sits the corresponding virtual network, and if we want the VM to work on a given network we simply attach its interface to that network. Docker is similar: it ships with three built-in networks, bridge, host, and none. bridge is the default network type for Docker containers; when a container is started without specifying a network, it is attached to the bridge network, which bridges the container to the docker0 bridge on the host, and docker0 is a NAT bridge. host means sharing the host's network, that is, the container lives in the same network namespace as the host. none means an empty network: if we start a container with the network type none, it has no interface other than lo inside, so the container can only communicate with itself, which is similar to the host-only network in a VM. In fact, besides these three built-in networks, Docker also supports user-defined networks, which create additional network namespaces for containers. The network drivers supported by Docker are bridge, host, ipvlan, macvlan, null, and overlay;
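As a quick check on your own host, the built-in networks and the supported network drivers can be listed with the commands below (a small sketch; the --format template assumes a reasonably recent Docker engine, and the exact driver list depends on the version):

docker network ls                               # the three built-in networks: bridge, host, none
docker info --format '{{.Plugins.Network}}'     # the network drivers this engine supports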
Tip: the figure above shows the network of two containers. By default a Docker container uses the bridge network and is bridged to the docker0 bridge. From the containers' point of view, their networks are isolated from each other: a port listened on in the first container is not visible inside the second container. From the host's point of view, both containers' networks hang off the docker0 bridge, and in the host's network namespace you can see the virtual interfaces of the two containers attached to docker0; so by default, containers started on the same host can communicate with each other through docker0. We can think of the docker0 bridge as a switch connecting the two containers;
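To see those veth interfaces from the host's network namespace, something like the following can be used (a sketch; brctl comes from the bridge-utils package and may not be installed):

ip link show master docker0      # list the interfaces attached to the docker0 bridge
brctl show docker0               # the same view with bridge-utils, listing the bridge and its ports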
Tip: you can see that there are two containers running on the host and that their virtual network interfaces are all attached to the docker0 bridge, so the two containers can communicate with each other; but for hosts other than this one they are not visible, and external hosts cannot reach these two containers directly, because docker0 is a NAT bridge; as shown below
Tip: the rule above says that packets with a source address in 172.17.0.0/16 that leave through an interface other than docker0 are masqueraded (SNAT) as they pass through the POSTROUTING chain; from this we can confirm that the docker0 bridge is a NAT bridge;
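If you want to reproduce this check yourself, the SNAT rule lives in the nat table and can be listed with either of the commands below (a sketch; the exact rule text varies slightly between Docker versions):

iptables -t nat -S POSTROUTING       # shows the MASQUERADE rule for 172.17.0.0/16 leaving via interfaces other than docker0
iptables -t nat -nvL POSTROUTING     # the same rule with packet and byte counters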
Tip: the picture above shows the Docker network models. A closed container has only the lo interface; it is isolated from other containers and can only communicate with itself. A bridged container usually has two interfaces: the lo loopback interface and an Ethernet interface connected to the docker0 bridge on the host. When the docker daemon starts, it creates a bridge named docker0 by default, and containers are bridged containers by default (their network is bridged to docker0). When running a container with docker run, we can use --net or --network to specify the container's network; the default is --net bridge. Because docker0 is a NAT bridge, bridged containers can reach the external network through their bridge interface, but the firewall rules block requests from the external network to the bridged containers. An open container shares the host's network namespace, which means the host's network resources are not isolated from it at all. A joined container shares its network with one or more other containers; essentially, several containers share the same network namespace;
Example: creating a bridged-network container
[root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@node1 ~]# docker run --name web1 -d --net bridge linux1874/myimg:v0.1 bced86e3dcb2e8144ade7cfc5b8c0dcfb5ee0af5e9d729904c7eb971d9c1117f [root@node1 ~]# docker run --name web2 -d --net bridge linux1874/myimg:v0.1 d39dea20d5ea0b52c42c67a8a9a36dcbc2ef78d654dfb9e1d88013306f2f5a50 [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d39dea20d5ea linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 6 seconds ago Up 5 seconds web2 bced86e3dcb2 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 11 seconds ago Up 11 seconds web1 [root@node1 ~]# docker container inspect myweb1 -f {{.NetworkSettings.Networks.bridge.NetworkID}} Error: No such container: myweb1 [root@node1 ~]# docker container inspect web1 -f {{.NetworkSettings.Networks.bridge.NetworkID}} 2ae9aa6566561e8f1f5c98f4877b34b29a14ed8dd645933926b0615e9e0b2567 [root@node1 ~]# docker container inspect web2 -f {{.NetworkSettings.Networks.bridge.NetworkID}} 2ae9aa6566561e8f1f5c98f4877b34b29a14ed8dd645933926b0615e9e0b2567 [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 2ae9aa656656 bridge bridge local 93347fb33d89 host host local a99b876eee4d none null local [root@node1 ~]# docker container exec web1 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever [root@node1 ~]# docker container exec web2 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever [root@node1 ~]# docker container exec web2 wget -O - -q 172.17.0.2 this test file [root@node1 ~]# docker container exec web1 wget -O - -q 172.17.0.3 this test file
Tip: containers on the bridge network can communicate with each other through the docker0 bridge; we can think of containers attached to the same bridge as being on the same network, and their network IDs are the same. The network ID of both containers above is the ID of the bridge network;
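Another way to confirm which containers sit on the bridge network and which addresses they received is docker network inspect; a small sketch (the Go template assumes the usual layout of the inspect output):

docker network inspect bridge
docker network inspect bridge -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'   # name and IPv4 address of every attached container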
Example: creating a none network container
[root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d39dea20d5ea linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 8 minutes ago Up 8 minutes web2 bced86e3dcb2 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 9 minutes ago Up 9 minutes web1 [root@node1 ~]# docker run --name web3 -d --net none linux1874/myimg:v0.1 a0039645d551ef7089d7ff0e588864da48c68a0b301e28728e48d0b520a006ba [root@node1 ~]# docker container inspect web3 -f {{.NetworkSettings.Networks.none.NetworkID}} a99b876eee4d028f6d6410749ac863915711482e6f55aaf38071aabaae53ee2e [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 2ae9aa656656 bridge bridge local 93347fb33d89 host host local a99b876eee4d none null local [root@node1 ~]# docker container exec web3 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever [root@node1 ~]#
Tip: you can see that the network ID of a container on the none network is the same as the ID of the none network itself, and that there is only a lo interface inside the container;
Example: creating a shared network container
[root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a0039645d551 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 2 minutes ago Up 2 minutes web3 d39dea20d5ea linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 12 minutes ago Up 12 minutes web2 bced86e3dcb2 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 12 minutes ago Up 12 minutes web1 [root@node1 ~]# docker run --name web4 -d --net host linux1874/myimg:v0.1 f2dc3d0bb6992b6f0f93b3db50d952574c45473911a7133b735ae1b633855962 [root@node1 ~]# docker container inspect web4 -f {{.NetworkSettings.Networks.host.NetworkID}} 93347fb33d89a5abd7781ab1a01b51ad46dde4b2409b88d3d5f2e4a7bd0aed2e [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 2ae9aa656656 bridge bridge local 93347fb33d89 host host local a99b876eee4d none null local [root@node1 ~]# docker container exec ip a Error: No such container: ip [root@node1 ~]# docker container exec web4 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:0c:29:2e:06:0b brd ff:ff:ff:ff:ff:ff inet 192.168.0.31/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe2e:60b/64 scope link valid_lft forever preferred_lft forever 3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue link/ether 02:42:63:bb:cd:b4 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:63ff:febb:cdb4/64 scope link valid_lft forever preferred_lft forever 11: vethffac5a0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 link/ether 82:b0:23:49:92:de brd ff:ff:ff:ff:ff:ff inet6 fe80::80b0:23ff:fe49:92de/64 scope link valid_lft forever preferred_lft forever 13: veth9e7a8c4@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 link/ether a6:d3:a2:c9:9f:45 brd ff:ff:ff:ff:ff:ff inet6 fe80::a4d3:a2ff:fec9:9f45/64 scope link valid_lft forever preferred_lft forever [root@node1 ~]# docker container exec web4 hostname node1 [root@node1 ~]# hostname node1 [root@node1 ~]#
Tip: the host network shares the network namespace of the host, so the network interfaces we see inside the container are the same as the interfaces on the host;
Example: creating a joined-network container
[root@node1 ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE linux1874/myimg v0.1 e408b1c6e04f 29 hours ago 1.22MB busybox latest 78096d0a5478 10 days ago 1.22MB centos 7 b5b4d78bc90c 2 weeks ago 203MB nginx stable-alpine ab94f84cc474 4 weeks ago 21.3MB [root@node1 ~]# docker run --name b1 --net container:web4 -it busybox / # [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2b7517adb2ad busybox "sh" 13 seconds ago Up 13 seconds b1 f2dc3d0bb699 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 22 minutes ago Up 22 minutes web4 a0039645d551 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 27 minutes ago Up 27 minutes web3 d39dea20d5ea linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 36 minutes ago Up 36 minutes web2 bced86e3dcb2 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 36 minutes ago Up 36 minutes web1 [root@node1 ~]# docker container exec b1 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:0c:29:2e:06:0b brd ff:ff:ff:ff:ff:ff inet 192.168.0.31/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe2e:60b/64 scope link valid_lft forever preferred_lft forever 3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue link/ether 02:42:63:bb:cd:b4 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:63ff:febb:cdb4/64 scope link valid_lft forever preferred_lft forever 11: vethffac5a0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 link/ether 82:b0:23:49:92:de brd ff:ff:ff:ff:ff:ff inet6 fe80::80b0:23ff:fe49:92de/64 scope link valid_lft forever preferred_lft forever 13: veth9e7a8c4@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 link/ether a6:d3:a2:c9:9f:45 brd ff:ff:ff:ff:ff:ff inet6 fe80::a4d3:a2ff:fec9:9f45/64 scope link valid_lft forever preferred_lft forever
Tip: when running the busybox container, pay attention to whether a program is running in the foreground; if nothing runs in the foreground, the container exits as soon as it starts. Here I first run /bin/sh interactively, then press ctrl+p followed by ctrl+q to detach from the current terminal and leave the b1 container running. From the output above we can see that the network of the b1 container is joined to the web4 container, so b1 and web4 are in the same network namespace; and since web4 shares the host's network namespace, the host's network interfaces are also visible inside b1;
[root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2b7517adb2ad busybox "sh" 15 minutes ago Up 15 minutes b1 f2dc3d0bb699 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 38 minutes ago Up 38 minutes web4 a0039645d551 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 42 minutes ago Up 42 minutes web3 d39dea20d5ea linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 52 minutes ago Up 52 minutes web2 bced86e3dcb2 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 52 minutes ago Up 52 minutes web1 [root@node1 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 9 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* [root@node1 ~]# docker attach b1 / # netstat -tnl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN tcp 0 0 :::22 :::* LISTEN tcp 0 0 ::1:25 :::* LISTEN / # read escape sequence [root@node1 ~]# docker container exec -it web4 /bin/sh / # netstat -ntl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN tcp 0 0 :::22 :::* LISTEN tcp 0 0 ::1:25 :::* LISTEN / # exit [root@node1 ~]#
Tip: b1 and web4 are in the same network namespace as the host, so port 80 listened on in web4 can be seen on the host and in b1, and the other ports listened on by the host can also be seen in b1 and web4. To get back into a detached container, use the docker attach command. In this way, we can reach the service inside the container by accessing port 80 of the host; likewise, from inside b1 we can reach the service on web4 via the lo interface
[root@node1 ~]# docker attach b1
/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 sh
   14 root      0:00 ps aux
/ # wget -O - -q http://127.0.0.1
this test file
/ # read escape sequence
[root@node1 ~]#
Tip: although the service on web4 can be reached through the lo interface, the containers' other resources are still isolated from each other; only the network namespace is shared;
In addition, when running a container we can inject some network-related information into it through options of docker run
Example: injecting a hostname into the container
[root@node1 ~]# docker run --name web01 -it --rm --net bridge --hostname www.ilinux.io linux1874/myimg:v0.1 /bin/sh
/ # hostname
www.ilinux.io
/ # exit
[root@node1 ~]#
Tip: you can see that the hostname inside the container is the hostname we gave when running the container; --rm means the container is removed automatically when it exits; note that in older Docker releases -d and --rm are mutually exclusive and cannot be used at the same time;
Example: injecting hostname-to-address resolution entries from outside
[root@node1 ~]# docker run --name web01 -it --rm --net bridge --hostname www.ilinux.io --add-host www.test.com:1.1.1.1 --add-host www.test2.com:2.2.2.2 linux1874/myimg:v0.1 /bin/sh
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
1.1.1.1	www.test.com
2.2.2.2	www.test2.com
172.17.0.4	www.ilinux.io www
/ # exit
[root@node1 ~]#
Tip: the --add-host option can be used multiple times. As an application scenario, if multiple containers need to communicate with each other by hostname, we can inject the hostname-to-address resolution entries into them this way;
Example: injecting DNS server addresses into the container
[root@node1 ~]# docker run --name web01 -it --rm --net bridge --hostname www.ilinux.io --add-host www.test.com:1.1.1.1 --add-host www.test2.com:2.2.2.2 --dns 3.3.3.3 --dns 4.4.4.4 linux1874/myimg:v0.1 /bin/sh
/ # cat /etc/resolv.conf
nameserver 3.3.3.3
nameserver 4.4.4.4
/ # exit
[root@node1 ~]#
Tip: --dns specifies a DNS server address; this option can also be used multiple times to specify multiple DNS server addresses;
Example: injecting DNS search domains
[root@node1 ~]# docker run --name web01 -it --rm --net bridge --hostname www.ilinux.io --add-host www.test.com:1.1.1.1 --add-host www.test2.com:2.2.2.2 --dns 3.3.3.3 --dns 4.4.4.4 --dns-search test.com --dns-search test2.com linux1874/myimg:v0.1 /bin/sh
/ # cat /etc/resolv.conf
search test.com test2.com
nameserver 3.3.3.3
nameserver 4.4.4.4
/ # exit
[root@node1 ~]#
Tip: the --dns-search option specifies a DNS search domain; it can also be used multiple times to specify multiple search domains;
Example: specifying an IP address for the container
[root@node1 ~]# docker run --name web01 -it --rm --net bridge --hostname www.ilinux.io --add-host www.test.com:1.1.1.1 --add-host www.test2.com:2.2.2.2 --dns 3.3.3.3 --dns 4.4.4.4 --dns-search test.com --dns-search test2.com --ip 172.17.0.20 linux1874/myimg:v0.1 /bin/sh docker: Error response from daemon: user specified IP address is supported on user defined networks only. [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 2ae9aa656656 bridge bridge local 93347fb33d89 host host local a99b876eee4d none null local [root@node1 ~]# docker network create --subnet 10.0.0.0/24 mynet e80b4e4cc6e9f2c772797c27b0cd81b93cc874e0ffbc5cc2ea2d3a1d9d5530fb [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 2ae9aa656656 bridge bridge local 93347fb33d89 host host local e80b4e4cc6e9 mynet bridge local a99b876eee4d none null local [root@node1 ~]# docker run --name web01 -it --rm --net mynet --hostname www.ilinux.io --add-host www.test.com:1.1.1.1 --add-host www.test2.com:2.2.2.2 --dns 3.3.3.3 --dns 4.4.4.4 --dns-search test.com --dns-search test2.com --ip 10.0.0.200 linux1874/myimg:v0.1 /bin/sh / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:0a:00:00:c8 brd ff:ff:ff:ff:ff:ff inet 10.0.0.200/24 brd 10.0.0.255 scope global eth0 valid_lft forever preferred_lft forever / # exit [root@node1 ~]#
Tip: by default, Docker does not let you specify the container's IP address at startup; a fixed IP can only be assigned on a user-defined network. To give a container a fixed IP, first create a network yourself, then specify that network with --net or --network when running the container, and finally specify the IP address with --ip;
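Before assigning a fixed IP it can help to double-check the address range of the user-defined network; a sketch using the mynet network created above (the template assumes the usual IPAM layout of the inspect output):

docker network inspect mynet                                          # full details of the user-defined network
docker network inspect mynet -f '{{(index .IPAM.Config 0).Subnet}}'   # just the subnet configured for mynet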
Example: connecting multiple networks to a container
[root@node1 ~]# docker run --name web1 -it --rm --network bridge linux1874/myimg:v0.1 /bin/sh / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 52: eth0@if53: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever / # [root@node1 ~]# docker network ls NETWORK ID NAME DRIVER SCOPE 03e2689873d0 bridge bridge local 93347fb33d89 host host local e80b4e4cc6e9 mynet bridge local a99b876eee4d none null local [root@node1 ~]# docker network connect mynet web1 [root@node1 ~]# docker attach web1 / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 52: eth0@if53: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 54: eth1@if55: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff inet 10.0.0.2/24 brd 10.0.0.255 scope global eth1 valid_lft forever preferred_lft forever / #
Tip: docker network connect attaches an additional network to a container; it adds another network interface inside the container and connects it to the specified network;
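The counterpart of docker network connect is docker network disconnect, which detaches a container from a network again; a short sketch reusing the names from the example above (docker network connect also accepts --ip on user-defined networks if a fixed address is wanted):

docker network disconnect mynet web1                   # remove the eth1 interface that was attached from mynet
docker network connect --ip 10.0.0.100 mynet web1      # reattach it with a fixed address on the user-defined network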
Port exposure
In general, we run Docker containers on the bridge network; custom networks and the other network types are only used in special scenarios. One problem with the bridge network is: how can the services inside a container be reached by external hosts? By default, external hosts cannot reach the container network directly, because docker0 is a NAT bridge: it lets internal containers reach external hosts, but not the other way around. If we wanted direct access we would have to modify the iptables rules ourselves, which is troublesome. Another option is to run a reverse proxy on the host, so that external clients access the proxy and the proxy forwards to the service inside the container; this works, but it is not what is usually done. The usual approach is to map the service port inside the container to a port on the host and let iptables do DNAT, so that external hosts reach the container's service through the mapped host port. In fact, docker container run has a -p option exactly for this purpose: with -p we specify which host IP and/or port should be mapped to which container port, and under the hood it works by adding DNAT rules to the host's iptables;
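As a quick reference, -p accepts the following forms, which the examples below demonstrate one by one (angle brackets mark placeholders):

-p <containerPort>                        # map the container port to a random port on all host addresses
-p <hostPort>:<containerPort>             # map the container port to the given port on all host addresses
-p <hostIP>::<containerPort>              # map the container port to a random port on the given host address
-p <hostIP>:<hostPort>:<containerPort>    # map the container port to the given port on the given host address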
Example: mapping container port 80 to a dynamic port on the host
[root@node1 ~]# docker run --name web01 -d -p 80 linux1874/myimg:v0.1 7dd827a920f07badc79911120aa9fee5dc74d8d9959f554ac9c19d911befd31c [root@node1 ~]# docker container port web01 80/tcp -> 0.0.0.0:32768 [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 11 852 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 [root@node1 ~]#
Tip: -p with only the container's service port maps that container port to a random port on the host, usually starting from 32768. From the output above we can see that port 80 of the container is mapped to port 32768 on the host, and Docker has added a DNAT rule to the nat table of the host's iptables; now we can reach the service inside the container by accessing port 32768 on the host. Generally we do not recommend mapping a container service to a random host port, because clients have no way of knowing what service sits behind a random port, which of course limits access;
Test: access port 32768 of the host to see whether the web service inside the container can be reached
[root@docker_node1 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:22:36:7f brd ff:ff:ff:ff:ff:ff inet 192.168.0.22/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe22:367f/64 scope link valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:72:cb:66:6d brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever [root@docker_node1 ~]# curl 192.168.0.31:32768 this test file [root@docker_node1 ~]#
Example: mapping a container port to a specified host port
[root@node1 ~]# docker run --name web02 -d --net bridge -p 80:80 linux1874/myimg:v0.1 351ed7833aa1908c43abc061f5acdc7bd5e083ea097b26c1f4d4a7422bfbfcfa [root@node1 ~]# docker container port web02 80/tcp -> 0.0.0.0:80 [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 351ed7833aa1 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 23 seconds ago Up 21 seconds 0.0.0.0:80->80/tcp web02 7dd827a920f0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 10 minutes ago Up 10 minutes 0.0.0.0:32768->80/tcp web01 [root@node1 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::32768 :::* [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 2 packets, 466 bytes) pkts bytes target prot opt in out source destination 12 912 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 2 packets, 466 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 [root@node1 ~]#
Tip: you can see that we mapped port 80 of the container to port 80 of the host, and port 80 of the host is now in the listening state; at the same time, a new DNAT rule has been added to the nat table of iptables;
Test: access port 80 of the host from another machine to see whether the service on port 80 inside the container can be reached
[root@docker_node1 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:22:36:7f brd ff:ff:ff:ff:ff:ff inet 192.168.0.22/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe22:367f/64 scope link valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:72:cb:66:6d brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever [root@docker_node1 ~]# curl 192.168.0.31 this test file [root@docker_node1 ~]#
Example: mapping a container port to a dynamic port on a specified host IP
[root@node1 ~]# docker run --name web03 -d --net bridge -p 192.168.0.214::80 linux1874/myimg:v0.1 b4276ea0d0c0265a608b8111d64655a8667871f1b19cf971b42921a9a464c020 [root@node1 ~]# docker container port web03 80/tcp -> 192.168.0.214:32769 [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b4276ea0d0c0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 15 seconds ago Up 15 seconds 192.168.0.214:32769->80/tcp web03 351ed7833aa1 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp web02 7dd827a920f0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 20 minutes ago Up 20 minutes 0.0.0.0:32768->80/tcp web01 [root@node1 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 192.168.0.214:32769 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::32768 :::* [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 15 1068 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 2 packets, 152 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 2 packets, 152 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:32769 to:172.17.0.3:80 [root@node1 ~]#
Tip: you can see that port 32769 on 192.168.0.214 of the host is in the listening state; at the same time, a DNAT rule has been added to the nat table of iptables;
Test: access port 32769 of 192.168.0.214 from another host to see whether the service on port 80 inside the container can be reached
[root@docker_node1 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:22:36:7f brd ff:ff:ff:ff:ff:ff inet 192.168.0.22/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe22:367f/64 scope link valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:72:cb:66:6d brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever [root@docker_node1 ~]# curl 192.168.0.214:32769 this test file [root@docker_node1 ~]#
Example: mapping a container port to a specified port on a specified host IP
[root@node1 ~]# docker run --name web04 -d --net bridge -p 192.168.0.214:81:80 linux1874/myimg:v0.1 1f58cd21b2c861b8761032a07d9e47d3bf1a2c072816d931918171ea38a9c090 [root@node1 ~]# docker container port web04 80/tcp -> 192.168.0.214:81 [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1f58cd21b2c8 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 19 seconds ago Up 17 seconds 192.168.0.214:81->80/tcp web04 b4276ea0d0c0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 7 minutes ago Up 7 minutes 192.168.0.214:32769->80/tcp web03 351ed7833aa1 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 17 minutes ago Up 17 minutes 0.0.0.0:80->80/tcp web02 7dd827a920f0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 27 minutes ago Up 27 minutes 0.0.0.0:32768->80/tcp web01 [root@node1 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 192.168.0.214:81 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 192.168.0.214:32769 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::32768 :::* [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 16 1128 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 1 packets, 76 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.5 172.17.0.5 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:32769 to:172.17.0.3:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:81 to:172.17.0.5:80 [root@node1 ~]#
Tip: you can see that port 81 on 192.168.0.214 of the host is in the listening state, and a DNAT rule has been added to the nat table of iptables;
Test: access port 81 of 192.168.0.214 from another host to see whether the service on port 80 inside the container can be reached
[root@docker_node1 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:22:36:7f brd ff:ff:ff:ff:ff:ff inet 192.168.0.22/24 brd 192.168.0.255 scope global ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe22:367f/64 scope link valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:72:cb:66:6d brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever [root@docker_node1 ~]# curl 192.168.0.214:81 this test file [root@docker_node1 ~]#
Example: mapping multiple container ports to the host
[root@node1 ~]# docker run --name web05 -d --net bridge -p 80 -p 443 linux1874/myimg:v0.1 898e99cf0f37104954913488717969584d35677a0660874a52a502934599caf7 [root@node1 ~]# docker container port web05 443/tcp -> 0.0.0.0:32770 80/tcp -> 0.0.0.0:32771 [root@node1 ~]# ss -ntl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 192.168.0.214:81 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 192.168.0.214:32769 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::32768 :::* LISTEN 0 128 :::32770 :::* LISTEN 0 128 :::32771 :::* [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 898e99cf0f37 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 31 seconds ago Up 29 seconds 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp web05 1f58cd21b2c8 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 19 minutes ago Up 19 minutes 192.168.0.214:81->80/tcp web04 b4276ea0d0c0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 26 minutes ago Up 26 minutes 192.168.0.214:32769->80/tcp web03 351ed7833aa1 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 35 minutes ago Up 35 minutes 0.0.0.0:80->80/tcp web02 7dd827a920f0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 46 minutes ago Up 46 minutes 0.0.0.0:32768->80/tcp web01 [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 17 1188 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 3 packets, 228 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 3 packets, 228 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.5 172.17.0.5 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.6 172.17.0.6 tcp dpt:443 0 0 MASQUERADE tcp -- * * 172.17.0.6 172.17.0.6 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:32769 to:172.17.0.3:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:81 to:172.17.0.5:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32770 to:172.17.0.6:443 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32771 to:172.17.0.6:80 [root@node1 ~]#
Tip: the -p option can be used multiple times to map several container ports to the host; alternatively, use -P (upper case) together with --expose to specify the ports to be published, as shown below
[root@node1 ~]# docker run --name web06 -d --net bridge -P --expose 80 --expose 443 linux1874/myimg:v0.1 b377f6d380828ece81d69fd83cf7678800466f45689850a7e47dde1619602ffa [root@node1 ~]# docker container port web06 443/tcp -> 0.0.0.0:32772 80/tcp -> 0.0.0.0:32773 [root@node1 ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b377f6d38082 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 19 seconds ago Up 18 seconds 0.0.0.0:32773->80/tcp, 0.0.0.0:32772->443/tcp web06 898e99cf0f37 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 3 minutes ago Up 3 minutes 0.0.0.0:32771->80/tcp, 0.0.0.0:32770->443/tcp web05 1f58cd21b2c8 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 22 minutes ago Up 22 minutes 192.168.0.214:81->80/tcp web04 b4276ea0d0c0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 29 minutes ago Up 29 minutes 192.168.0.214:32769->80/tcp web03 351ed7833aa1 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 39 minutes ago Up 39 minutes 0.0.0.0:80->80/tcp web02 7dd827a920f0 linux1874/myimg:v0.1 "/bin/sh -c '/bin/ht..." 49 minutes ago Up 49 minutes 0.0.0.0:32768->80/tcp web01 [root@node1 ~]# ss -tnl State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 192.168.0.214:81 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 192.168.0.214:32769 *:* LISTEN 0 128 :::80 :::* LISTEN 0 128 :::22 :::* LISTEN 0 100 ::1:25 :::* LISTEN 0 128 :::32768 :::* LISTEN 0 128 :::32770 :::* LISTEN 0 128 :::32771 :::* LISTEN 0 128 :::32772 :::* LISTEN 0 128 :::32773 :::* [root@node1 ~]# iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 17 1188 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 3 packets, 228 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 3 packets, 228 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !br-e80b4e4cc6e9 10.0.0.0/24 0.0.0.0/0 6 379 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.4 172.17.0.4 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.5 172.17.0.5 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.6 172.17.0.6 tcp dpt:443 0 0 MASQUERADE tcp -- * * 172.17.0.6 172.17.0.6 tcp dpt:80 0 0 MASQUERADE tcp -- * * 172.17.0.7 172.17.0.7 tcp dpt:443 0 0 MASQUERADE tcp -- * * 172.17.0.7 172.17.0.7 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- br-e80b4e4cc6e9 * 0.0.0.0/0 0.0.0.0/0 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.4:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:32769 to:172.17.0.3:80 1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 192.168.0.214 tcp dpt:81 to:172.17.0.5:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32770 to:172.17.0.6:443 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32771 to:172.17.0.6:80 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32772 to:172.17.0.7:443 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32773 to:172.17.0.7:80 [root@node1 ~]#
Tip: when publishing ports with -P (upper case) plus --expose, you cannot choose a particular host IP or a specific host port; the exposed ports are always published to random ports on all of the host's IP addresses. In short, with this method you cannot specify the host IP or port;
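Note that -P on its own also publishes any ports the image declares with EXPOSE in its Dockerfile, even without --expose on the command line. A quick way to check what an image exposes (a sketch; the template assumes the usual docker image inspect layout, and the image is the one used above):

docker image inspect linux1874/myimg:v0.1 -f '{{.Config.ExposedPorts}}'   # ports baked into the image with EXPOSE, if any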
Example: modifying the network properties of the docker0 bridge
Tip: to modify the network configuration of the docker0 bridge, we edit /etc/docker/daemon.json and add a bip entry that specifies the network address of the docker0 bridge. Note that if the file is empty, we can simply write a pair of braces containing "bip": "<ip address/prefix length>"; if there are already other entries, pay attention to the commas: every entry except the last one must be followed by a comma, and the last one must not be (see the sketch below). This file actually supports many settings, and several attributes of the docker0 bridge can be configured here, but the core option is bip, which specifies the IP address and subnet mask of the docker0 bridge; the other options can be omitted, since they are computed automatically from bip. For details on /etc/docker/daemon.json, please refer to the official documentation at https://docs.docker.com/engine/reference/commandline/dockerd/#run-multiple-daemons;
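A minimal sketch of /etc/docker/daemon.json with a bip entry (the addresses and the mirror URL below are purely illustrative; note the comma after every entry except the last one), followed by the restart that makes the new docker0 address take effect:

{
  "registry-mirrors": ["https://mirror.example.com"],
  "bip": "192.168.10.1/24"
}

systemctl restart docker     # restart the docker daemon so that docker0 picks up the new address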