docker container network

After installation, Docker automatically provides three networks, which can be viewed with the docker network ls command:

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
6cf45e2c0e7d   bridge    bridge    local
b11e58ca673a   host      host      local
f023a89f7f92   none      null      local
[root@localhost ~]# 

Docker uses a Linux bridge to create a virtual container bridge (docker0) on the host machine. When Docker starts a container, it assigns the container an IP address from the docker0 subnet, called the Container-IP, and docker0 acts as the default gateway for every container. Because containers on the same host are attached to the same bridge, they can communicate with each other directly through their Container-IPs.
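
For example, a minimal sketch of looking up a Container-IP and reaching one container from another through the bridge (the container name web1 is hypothetical, and 172.17.0.2 is just an example address; yours may differ):

[root@localhost ~]# docker run -d --name web1 nginx
[root@localhost ~]# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web1   #print web1's Container-IP
172.17.0.2
[root@localhost ~]# docker run --rm busybox ping -c 2 172.17.0.2   #a second container pings web1 directly via its Container-IP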

The four network modes of Docker

Network mode    Configuration                      Explanation
host            --network host                     The container shares a Network namespace with the host
container       --network container:NAME_OR_ID     The container shares a Network namespace with another container
none            --network none                     The container has its own Network namespace, but no network settings are configured for it (no veth pair or bridge connection, no IP, etc.)
bridge          --network bridge                   Default mode
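
As a quick illustration, a sketch of selecting each mode at container start (the container name web1 on the container-mode line is hypothetical):

[root@localhost ~]# docker run -d --network bridge nginx
[root@localhost ~]# docker run -d --network host nginx
[root@localhost ~]# docker run -d --network container:web1 busybox sleep 3600
[root@localhost ~]# docker run -d --network none busybox sleep 3600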

bridge mode

For example, after running some containers, view the network interfaces on the host machine:

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:be:41:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.244.144/24 brd 192.168.244.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::c8bb:96e4:534:b9f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default  #docker gateway
    link/ether 02:42:4e:ac:ee:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:4eff:feac:eeaa/64 scope link 
       valid_lft forever preferred_lft forever
9: vethb7b2b8a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether fa:dc:84:e4:86:9f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::f8dc:84ff:fee4:869f/64 scope link 
       valid_lft forever preferred_lft forever
11: veth2bf6412@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether a2:c1:40:37:9b:6f brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::a0c1:40ff:fe37:9b6f/64 scope link 
       valid_lft forever preferred_lft forever

When the Docker daemon starts, it creates a virtual network bridge named docker0 on the host, and Docker containers started on that host connect to this virtual bridge. A virtual bridge works like a physical switch, so all containers on the host are joined to a layer-2 network through it.

Docker assigns the container an IP from the docker0 subnet and sets the IP address of docker0 as the container's default gateway. It also creates a pair of virtual network interfaces, a veth pair, on the host: one end is placed inside the newly created container and named eth0 (the container's network card), while the other end stays on the host, named something like vethxxx, and is attached to the docker0 bridge. You can view this with the brctl show command.
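
For instance, on the host shown above, brctl show would report something like this (a sketch; the exact bridge id and interface names depend on your system, and brctl is provided by the bridge-utils package):

[root@localhost ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02424eaceeaa       no              veth2bf6412
                                                        vethb7b2b8a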

Bridge mode is Docker's default network mode; when no --network parameter is given, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement port forwarding, which you can view with iptables -t nat -vnL.
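
For example, if the nginx2 container shown in the inspect output later had been started with docker run -d -p 8080:80 --name nginx2 nginx (the 8080 host port is an assumption), the DOCKER chain would contain a DNAT rule roughly like this (counters and columns abridged):

[root@localhost ~]# iptables -t nat -vnL DOCKER
Chain DOCKER (2 references)
 pkts bytes target   prot opt in       out   source      destination
    0     0 RETURN   all  --  docker0  *     0.0.0.0/0   0.0.0.0/0
    0     0 DNAT     tcp  --  !docker0 *     0.0.0.0/0   0.0.0.0/0    tcp dpt:8080 to:172.17.0.3:80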

Bridge mode is shown in the following figure:

Assuming that nginx is running in the docker2 container in the figure above, let's think about a few questions:

  • Is direct communication possible between two containers on the same host? For example, can you access docker2's nginx site directly from docker1?
  • Can you access docker2's nginx site directly from the host machine?
  • How can this nginx site be accessed from node1 on another host? By exposing it via DNAT?

The docker0 bridge is a virtual device created on the host, not a real network device; external networks cannot address it, which also means that external networks cannot reach a container directly through its Container-IP. To make a container accessible from outside, map a container port to the host (port mapping): create the container with the -p or -P parameter of docker run and access it as [host IP]:[host port].
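
Continuing the hypothetical 8080:80 mapping for nginx2 above, the site becomes reachable through the host's address (192.168.244.144 is the host IP from the ip a output earlier):

[root@localhost ~]# curl -I http://192.168.244.144:8080   #answered by nginx inside the nginx2 container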

container mode

This mode specifies that a newly created container shares a Network Namespace with an existing container, rather than with the host. The new container does not create its own network card or configure its own IP; instead it shares the IP, port range, and so on with the specified container. Apart from the network, the two containers remain isolated in other respects, such as the file system and process list. The processes of the two containers can communicate through the lo loopback device.
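
As a minimal sketch (the container name web1 is hypothetical), a busybox container joining web1's network namespace can reach its nginx over the loopback interface:

[root@localhost ~]# docker run -d --name web1 nginx
[root@localhost ~]# docker run --rm --network container:web1 busybox wget -qO- http://127.0.0.1   #served by web1's nginx via lo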

Container mode is shown in the following figure:

host mode

If host mode is used when starting a container, the container does not get a separate Network Namespace but shares one with the host. The container does not virtualize its own network card or configure its own IP; instead it uses the host's IP and ports. However, other aspects of the container, such as the file system and process list, remain isolated from the host.

A container using host mode can communicate with the outside world directly using the host's IP address, and service ports inside the container use the host's ports without NAT. The biggest advantage of host mode is better network performance, but ports already in use on the Docker host can no longer be used, and network isolation is poor.
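
For example, a sketch assuming port 80 is free on the host (the container name nginx_host is hypothetical): an nginx container started in host mode answers directly on the host's port 80, with no port mapping needed:

[root@localhost ~]# docker run -d --network host --name nginx_host nginx
[root@localhost ~]# curl -I http://127.0.0.1   #served by the container on the host's own port 80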

Host mode is shown in the following figure:

none mode

In none mode, the Docker container has its own Network Namespace, but no network configuration is performed for it. That is, the container has no network card, IP, or routing; we must add a network card, configure an IP, and so on for the container ourselves.

In this network mode, the container has only the lo loopback interface and no other network card. None mode can be specified at container creation with --network none. Such a container cannot connect to the network, and a closed network effectively guarantees the container's security.
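
A quick check (a sketch using the busybox image; busybox's ip output is slightly abridged compared to the host's): inside a none-mode container only lo is present:

[root@localhost ~]# docker run --rm --network none busybox ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever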

Scenarios:

  • Start a container to process data, such as converting data formats
  • Some background computing and processing tasks

The none mode is shown in the following figure:

docker network inspect bridge   #view the detailed configuration of the bridge network and the containers attached to it

[root@localhost ~]# docker network inspect bridge 
[
    {
        "Name": "bridge",   #Default network type
        "Id": "6cf45e2c0e7d515360c691bcead1e9154b14bed606d97aa3e6a37054e7e17bd5",
        "Created": "2021-12-03T18:13:19.316183172+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"     #gateway
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9cf28ddb005a884ff9e8e958038b0e0ab2b44e700824ec91bea21aeeb7be9b74": {
                "Name": "nginx2",   #Running nginx Container
                "EndpointID": "3c4977966670ce6274127d4b97129b75df5ace3d567a7212a74c77bdb0e8857c",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",    #IP Address   
                "IPv6Address": ""
            },
            "fcf365bc138c76748702806564e830fda299e9b806ba4e45cf978059ff9743f8": {
                "Name": "mysql",     #MySQL Container
                "EndpointID": "60321be9026aa609f33ba8f4d2da1ed571e35ac49aed22b7099a0561bbec44d5",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",   #IP Address
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
