Setting up a zookeeper cluster with Docker

Using Docker to build a zookeeper cluster environment

Start docker service

If docker commands report that the daemon is not running, the docker service needs to be started manually:

service docker start

The following error may be reported when starting a container:

[root@localhost ~]# docker start 722cb567ad8b
Error response from daemon: driver failed programming external connectivity on endpoint mall-user-service (f83187d7e06975dbfb8d83d45a6907bf8e575be0aedee8aed9ea694cc90e5b97):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8184 -j DNAT --to-destination 172.18.6.2:8184 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1))
Error: failed to start containers: 722cb567ad8b

If this happens, just restart docker (systemctl restart docker).

Also stop the CentOS 7 firewall (systemctl stop firewalld).

It is recommended to enable the docker service so that it starts automatically on boot:

systemctl enable docker # Start docker automatically on boot

systemctl start docker # Start docker
systemctl restart docker # Restart docker

Use the Alibaba Cloud image accelerator to pull the official zookeeper image from Docker Hub:

docker pull zookeeper
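If the accelerator has not been configured yet, it is usually added as a registry mirror in /etc/docker/daemon.json. A minimal sketch; the mirror address is a placeholder you replace with the one from your own Alibaba Cloud console:

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker # restart so the mirror takes effect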

Test start zookeeper

Start zookeeper in the background

docker run -d --name my_zk zookeeper

docker logs -f my_zk: follow the log output to make sure startup succeeded

Start the zk client and test the connection to the zk server

Use the ZK command-line client to connect to ZK. Because the ZK container we just started does not publish its port to the host, we cannot access it directly; however, we can reach it through Docker's link mechanism. Execute the following command:

docker run -it --rm --link my_zk:zookeeper zookeeper zkCli.sh -server zookeeper

This starts a new zookeeper container and runs zkCli.sh inside it with the argument "-server zookeeper".

--rm: delete the container when it stops running

--link my_zk:zookeeper: map the IP and hostname of the server container into the client container; the hostname used is zookeeper

In other words, the container we started earlier with the name my_zk is linked into the newly created container under the hostname zookeeper.

After executing this command, we can operate the ZK service as usual with the ZK command-line client.
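For example, a few standard commands inside the zkCli prompt are enough to confirm that the server works; /test here is just a throwaway node name:

ls / # list the root znodes
create /test "hello" # create a znode with some data
get /test # read it back
delete /test # clean up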

Building the ZK cluster

1. Create docker-compose.yml file

version: '2'
services:
  zoo1:
    image: zookeeper:3.4 # Image name
    restart: always # Automatic restart in case of error
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports: # port
      - 2181:2181
    volumes: # Mount data volume
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog 
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1 # Node ID
      ZOO_PORT: 2181 # zookeeper port number
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888 # zookeeper node list
    networks:
      default:
        ipv4_address: 192.168.112.11

  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2182:2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 192.168.112.12

  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2183:2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 192.168.112.13

networks:
  default:
    external:
      name: mynet

This configuration file tells Docker to run three zookeeper containers and to bind the local ports 2181, 2182 and 2183 to port 2181 of the corresponding containers. ZOO_MY_ID and ZOO_SERVERS are the two environment variables required to build a ZK cluster: ZOO_MY_ID is the ID of the ZK server, an integer between 1 and 255 that must be unique within the cluster; ZOO_SERVERS is the host list of the ZK cluster:

server.1=zoo1:2888:3888 
server.2=zoo2:2888:3888 
server.3=zoo3:2888:3888

Interpretation of configuration parameters

server.A=B:C:D

A is a number indicating the server number;

In cluster mode, a file named myid is placed in the dataDir directory; it contains the value of A. When Zookeeper starts, it reads this file and compares the value with the configuration in zoo.cfg to determine which server it is.

B is the address of the server;

C is the port where the Follower server exchanges information with the Leader server in the cluster;

D is the port used for leader election: if the Leader server in the cluster goes down, the remaining servers use this port to communicate with each other and elect a new Leader.
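Once the cluster is up (see the steps below), you can check how these environment variables were translated into the configuration inside a node. A small sketch, assuming the file layout of the official zookeeper image:

docker exec zoo1 cat /conf/zoo.cfg # should contain the three server.N=...:2888:3888 lines
docker exec zoo1 cat /data/myid # should print 1, i.e. the ZOO_MY_ID of zoo1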

Don't forget to create the network for the zookeeper cluster first:

docker network create --driver bridge --subnet 192.168.112.0/16 --gateway 192.168.112.0 mynet

--driver bridge: use the bridge driver
--subnet 192.168.112.0/16: the subnet
--gateway 192.168.112.0: the gateway

You can view the details of the currently created network through the following command:

docker network inspect mynet

Docker network modes

Docker has three network modes: bridge, host and none. If you do not specify --network when creating a container, the default is bridge.

Docker's network communication really deserves a separate study; in brief:

  • bridge: each container gets its own IP and is connected to the docker0 virtual bridge; it communicates with the host through docker0. In this mode, Docker containers cannot communicate with each other via the host's IP plus the container's mapped port.
  • host: the container does not virtualize its own network card or configure its own IP; it uses the host's IP and ports directly. In this mode, Docker containers can communicate with each other via the host's IP plus the mapped ports.
  • none: no network.
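As a quick illustration of the three modes (nginx is used here only as an example image):

docker run -d --network bridge nginx # default: the container gets its own IP behind docker0
docker run -d --network host nginx # the container shares the host's network stack
docker run -d --network none nginx # the container has no network at all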


2. Start the cluster from the directory containing docker-compose.yml.
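The standard Compose command to start all three containers in the background is:

docker-compose up -d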

3. View the created zookeeper cluster

Run docker ps, or, from the directory where the docker-compose.yml file is located, enter:

docker-compose ps

4. Enter the cluster to view the cluster status

docker exec -it zoo1 /bin/bash
cd bin
zkServer.sh status
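To check all three nodes without entering each container, something like the following also works (zkServer.sh is on the PATH in the official image); one node should report Mode: leader and the other two Mode: follower:

for i in 1 2 3; do docker exec zoo$i zkServer.sh status; done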

If the zk connection fails, there may be a problem with the network configuration above; alternatively, try creating a myid file under each data directory to store the ID of the corresponding zk node (if it is not already configured through docker-compose).

If it is a network configuration problem, use the following docker-compose.yml file instead (it relies on the default Compose network rather than fixed IPs):

version: '2'
services:
  zoo1:
    image: zookeeper:3.4 # Image name
    restart: always # Automatic restart in case of error
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports: # port
      - 2181:2181
    volumes: # Mount data volume
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog 
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1 # Node ID
      ZOO_PORT: 2181 # zookeeper port number
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888 # zookeeper node list

  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2182:2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2183:2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

Or try adding the myid files manually.
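A minimal sketch, assuming the data directories are the ones mounted in the compose file above; each file holds the node's ZOO_MY_ID:

echo 1 > ./zoo1/data/myid
echo 2 > ./zoo2/data/myid
echo 3 > ./zoo3/data/myid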

References

docker builds zookeeper 3.5 cluster (super detailed)

Build Zookeeper cluster in Docker mode

Install zookeeper (stand-alone & cluster) under Docker

Docker builds zookeeper & Kafka cluster
