1, Install docker
https://blog.csdn.net/weixin_39912640/article/details/120391027
2, Install zookeeper cluster
Standalone ClickHouse does not use a ZooKeeper cluster. In cluster mode, a ZooKeeper cluster must be set up first, although ZooKeeper itself is only strictly needed for replicated tables.
https://blog.csdn.net/weixin_39912640/article/details/120392104
Standalone mode
- 1. Open the Windows console (Win + R -> type cmd -> Enter)
- 2. See which images are available
C:\Users\admin>docker search clickhouse
NAME                                         DESCRIPTION                                        STARS   OFFICIAL   AUTOMATED
yandex/clickhouse-server                     ClickHouse is an open-source column-oriented...    340                [OK]
yandex/clickhouse-client                     Native client for the Clickhouse database ma...    82                 [OK]
spoonest/clickhouse-tabix-web-client         tabix: https://github.com/tabixio/tabix            9                  [OK]
nikepan/clickhouse-bulk                      Collects small insterts and sends big reques...    4
f1yegor/clickhouse-exporter                  ClickHouse exporter for Prometheus                 3                  [OK]
alexakulov/clickhouse-backup                 Tool for easy ClickHouse backup and restore ...    3
altinity/clickhouse-operator                 ClickHouse Operator for Kubernetes (beta)          2
tacyuuhon/clickhouse-chproxy                 This is a Docker images of a chproxy               2
yandex/clickhouse-binary-builder                                                                1
yandex/clickhouse-python-bottle                                                                 1
flant/clickhouse-exporter                    Clickhouse Prometheus exporter                     0
yandex/clickhouse-stateful-test                                                                 0
yandex/clickhouse-integration-helper                                                            0
yandex/clickhouse-stateless-test                                                                0
podshumok/clickhouse                         My attempt to build Clickhouse with CPU-opti...    0
datagrip/clickhouse                          ClickHouse image with an external dictionary       0                  [OK]
yandex/clickhouse-stress-test                                                                   0
yandex/clickhouse-integration-test                                                              0
yandex/clickhouse-integration-tests-runner                                                      0
yandex/clickhouse-s3-proxy                                                                      0
yandex/clickhouse-unit-test                                                                     0
yandex/clickhouse-deb-builder                                                                   0
muxinc/clickhouse-server                     https://hub.docker.com/r/yandex/clickhouse-s...    0
crobox/clickhouse                            Clickhouse server image that only uses IPv4        0                  [OK]
yandex/clickhouse-fuzzer
2. Select an image; generally, choose the one with the most stars
# Server
docker pull yandex/clickhouse-server
# Client
docker pull yandex/clickhouse-client
Tip: this only involves basic Docker usage.
3. Run a temporary container, temp-clickhouse-server
Start a temporary server in order to obtain ClickHouse's default configuration files
docker run --rm -d --name=temp-clickhouse-server yandex/clickhouse-server

C:\Users\admin>docker run --rm -d --name=temp-clickhouse-server yandex/clickhouse-server
Unable to find image 'yandex/clickhouse-server:latest' locally
latest: Pulling from yandex/clickhouse-server
35807b77a593: Pull complete
227b2ff34936: Pull complete
1f146c18eea9: Pull complete
2f4cc4c74be7: Pull complete
6c77b18b2086: Pull complete
15926356e6d0: Pull complete
193cbb933a84: Pull complete
Digest: sha256:e7f902cc787bb1ef13dd2da6f0655895bfd43fb1556380ec666fd754714ca519
Status: Downloaded newer image for yandex/clickhouse-server:latest
af0aa200d1712290dc04cf912a0e78728bd444920b71917799da2b5621700709
4. Copy the config and users configuration files out to a directory on the Windows disk
C:\Users\admin>docker cp temp-clickhouse-server:/etc/clickhouse-server/config.xml F:/clickhouse/conf/config.xml
C:\Users\admin>docker cp temp-clickhouse-server:/etc/clickhouse-server/users.xml F:/clickhouse/conf/users.xml
- 3. Check whether the configuration files were copied successfully; a quick check is shown below
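A minimal check, assuming the F:/clickhouse/conf target directory from the previous step:

C:\Users\admin>dir F:\clickhouse\conf

config.xml and users.xml should both appear in the listing.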
5. Create account
This account is used by the client to connect to the server
- 1. Enter the temporary container temp-clickhouse-server, continuing from the Windows console
docker exec -it temp-clickhouse-server /bin/bash
- 2. In the container, run a command to generate the SHA-256 hash of the password. Example account: zhao, password: zhao
# Note: the PASSWORD variable below generates a random string but is not used afterwards,
# since the password "zhao" is hard-coded; the first echo prints the plain password,
# the second pipes it through sha256sum to produce the hash for users.xml
PASSWORD=$(base64 < /dev/urandom | head -c8); echo "zhao"; echo -n "zhao" | sha256sum | tr -d '-'
C:\Users\admin>docker exec -it temp-clickhouse-server /bin/bash
root@af0aa200d171:/# PASSWORD=$(base64 < /dev/urandom | head -c8); echo "zhao"; echo -n "zhao" | sha256sum | tr -d '-'
# User name
zhao
# Encrypted password
4ba739741e69e5d4df6d596c94901c58f72c48caaf8711be6fead80e2fa54ddd
root@af0aa200d171:/#
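To hash a different password, substitute it into both echo commands. A generic sketch (the PASS variable name is just an illustration, not part of the original command):

PASS='yourpassword'                       # the plain-text password to hash
echo -n "$PASS" | sha256sum | tr -d ' -'  # prints the SHA-256 hex for password_sha256_hex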
- 3. Modify the F:/clickhouse/conf/users.xml file
Find the <users> tag and add this user directly inside it
<users>
    <zhao>
        <password_sha256_hex>4ba739741e69e5d4df6d596c94901c58f72c48caaf8711be6fead80e2fa54ddd</password_sha256_hex>
        <networks incl="networks" replace="replace">
            <ip>::/0</ip>
        </networks>
        <profile>zhao</profile>
        <quota>zhao</quota>
    </zhao>
</users>
6. Modify listening host
Modify the F:/clickhouse/conf/config.xml file
Uncomment this tag to enable it, which allows access from outside the container over the network. If it is left commented out, connections may fail:
<listen_host>0.0.0.0</listen_host>
Tip: 0.0.0.0 covers the usual IPv4 case. For IPv6, change it to <listen_host>::</listen_host>
7. Destroy temporary containers
C:\Users\admin>docker stop af0aa200d171
af0aa200d171
8. Run ClickHouse service
- (1) Create directories: F:/clickhouse/data to store data and F:/clickhouse/log to store logs
- (2) From the Windows console, execute the command below; it maps ports 8123, 9000 and 9009, and maps the data, configuration and log directories to the Windows disk
docker run -d --name=single-clickhouse-server -p 8123:8123 -p 9000:9000 -p 9009:9009 --ulimit nofile=262144:262144 --volume F:/clickhouse/data:/var/lib/clickhouse:rw --volume F:/clickhouse/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/log:/var/log/clickhouse-server:rw yandex/clickhouse-server
Parameter interpretation
docker run -d
# Container name
--name=single-clickhouse-server
# Port mapping
-p 8123:8123 -p 9000:9000 -p 9009:9009
--ulimit nofile=262144:262144
# Map the data directory
--volume F:/clickhouse/data:/var/lib/clickhouse:rw
# Map the configuration files
--volume F:/clickhouse/conf:/etc/clickhouse-server:rw
# Map the logs
--volume F:/clickhouse/log:/var/log/clickhouse-server:rw
yandex/clickhouse-server
Installation complete
C:\Users\admin>docker run -d --name=single-clickhouse-server -p 8123:8123 -p 9000:9000 -p 9009:9009 --ulimit nofile=262144:262144 --volume F:/clickhouse/data:/var/lib/clickhouse:rw --volume F:/clickhouse/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/log:/var/log/clickhouse-server:rw yandex/clickhouse-server
a2a3e46643ac313554be38a260bae290e75d6cffaa7db202a4cbf52ad2712cfb
Check the ClickHouse container status to confirm the installation is complete
C:\Users\admin>docker ps
CONTAINER ID   IMAGE                      COMMAND            CREATED         STATUS         PORTS                                                                                                                             NAMES
a2a3e46643ac   yandex/clickhouse-server   "/entrypoint.sh"   4 minutes ago   Up 4 minutes   0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9009->9009/tcp, :::9009->9009/tcp   single-clickhouse-server
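As an optional sanity check (assuming curl is available on the host), ClickHouse's HTTP interface answers on the mapped port 8123 and its /ping endpoint returns "Ok.":

curl http://localhost:8123/ping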
9. Connect a client to ClickHouse
DBeaver

- 1. Download the Community Edition
  https://dbeaver.io/files/dbeaver-ce-latest-x86_64-setup.exe
- 2. Installation: just click Next through the installer
- 3. Connect to ClickHouse
- 3.1. Enter the host IP -> enter the port number -> enter the account and password -> test the connection -> finish
  On first connection you will be prompted to download the ClickHouse driver; just click Download.

The setup of this stand-alone version is now complete.
Standalone reference
https://blog.csdn.net/u010318957/article/details/114290585
Cluster mode
Environment: a ZooKeeper cluster has already been built,
and a self-built bridge network already exists. If the host's IP network were used instead, the ClickHouse nodes could not communicate with each other, nor with the ZooKeeper instances inside Docker.
Therefore, ZooKeeper and ClickHouse need to be on the same Docker network to communicate.
- https://blog.csdn.net/weixin_39912640/article/details/120392104
- The network was created with docker network create --driver bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 zoonet
  so that ClickHouse and ZooKeeper sit in the same subnet
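To confirm the subnet settings and see which containers are attached, an optional check:

docker network inspect zoonet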
Explanation:
Simulate a three-node ClickHouse cluster with three shards on ck-node1, ck-node2 and ck-node3
ck-node1: 172.18.0.5: 8123 9000 9009
ck-node2: 172.18.0.6: 8124 9001 9010
ck-node3: 172.18.0.7: 8125 9002 9011
1. Create directory
Create three directories to store configuration files and data respectively
F:\clickhouse\ck-node1
F:\clickhouse\ck-node2
F:\clickhouse\ck-node3
2. Modify the configuration files
- 1. Copy config.xml and users.xml from the stand-alone environment to ck-node1
2. Configure the users file
Add the user inside the <users> tag in users.xml
<zhao>
    <password_sha256_hex>4ba739741e69e5d4df6d596c94901c58f72c48caaf8711be6fead80e2fa54ddd</password_sha256_hex>
    <networks incl="networks" replace="replace">
        <ip>::/0</ip>
    </networks>
    <profile>zhao</profile>
    <quota>zhao</quota>
</zhao>
3. Edit config.xml
There are two configuration methods: one is to configure remote_servers directly in config.xml; the other is to create a metrika.xml in the conf folder
and then, in config.xml, add the following configuration after the <remote_servers> tag:
<include_from>/etc/clickhouse-server/metrika.xml</include_from>
Here we configure directly: find the <remote_servers> tag and add the following content inside it
ClickHouse shard configuration
<!-- Cluster name -->
<jdcluster>
    <!-- First shard -->
    <shard>
        <internal_replication>true</internal_replication>
        <replica>
            <!-- Node 1's IPv4 address. This is not the host machine's address,
                 but the node's IP inside the virtual subnet created in Docker -->
            <host>172.18.0.5</host>
            <!-- Node 1's TCP port -->
            <port>9000</port>
            <!-- Node 1's account -->
            <user>zhao</user>
            <!-- The password for node 1's account -->
            <password>zhao</password>
        </replica>
    </shard>
    <!-- Second shard -->
    <shard>
        <internal_replication>true</internal_replication>
        <replica>
            <!-- Node 2's IPv4 address -->
            <host>172.18.0.6</host>
            <!-- Node 2's TCP port -->
            <port>9000</port>
            <!-- Node 2's account -->
            <user>zhao</user>
            <!-- The password for node 2's account -->
            <password>zhao</password>
        </replica>
    </shard>
    <!-- Third shard -->
    <shard>
        <internal_replication>true</internal_replication>
        <replica>
            <!-- Node 3's IPv4 address -->
            <host>172.18.0.7</host>
            <!-- Node 3's TCP port -->
            <port>9000</port>
            <!-- Node 3's account -->
            <user>zhao</user>
            <!-- The password for node 3's account -->
            <password>zhao</password>
        </replica>
    </shard>
</jdcluster>
zookeeper configuration
<zookeeper>
    <node>
        <!-- This address must be the ZooKeeper node's IP in the Docker virtual subnet,
             so that ZooKeeper and ClickHouse can communicate with each other -->
        <host>172.18.0.2</host>
        <port>2181</port>
    </node>
    <node>
        <host>172.18.0.3</host>
        <port>2181</port>
    </node>
    <node>
        <host>172.18.0.4</host>
        <port>2181</port>
    </node>
</zookeeper>
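Once the nodes are running (step 5 below), the ZooKeeper connection can be verified from any ClickHouse client session; note that the system.zookeeper system table requires a path filter. An optional check:

SELECT name FROM system.zookeeper WHERE path = '/';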
network configuration
- 1. listen_host
<listen_host>0.0.0.0</listen_host>
- 2. network configuration
<networks>
    <ip>::/0</ip>
</networks>
- 3. macros configuration
<macros>
    <cluster>jdcluster</cluster>
    <shard>01</shard>
    <replica>jdcluster-01-1</replica>
</macros>
Configuration for the other two nodes
Copy the config.xml and users.xml files under node1 to node2 and node3,
and modify the macros configuration in each config.xml
<!-- ck-node2 -->
<macros>
    <cluster>jdcluster</cluster>
    <shard>02</shard>
    <replica>jdcluster-02-1</replica>
</macros>

<!-- ck-node3 -->
<macros>
    <cluster>jdcluster</cluster>
    <shard>03</shard>
    <replica>jdcluster-03-1</replica>
</macros>
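After the containers are started, the effective macro values on each node can be verified with a quick query (an optional check):

SELECT * FROM system.macros;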
- 5. Execute the startup scripts
docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 --name=ck_node1 --privileged --restart always --network zoonet --ip 172.18.0.5 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node1/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node1/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node1/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3

docker run -d -p 8126:8123 -p 9002:9000 -p 9029:9009 --name=ck_node2 --privileged --restart always --network zoonet --ip 172.18.0.6 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node2/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node2/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node2/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3

docker run -d -p 8127:8123 -p 9003:9000 -p 9039:9009 --name=ck_node3 --privileged --restart always --network zoonet --ip 172.18.0.7 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node3/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node3/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node3/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3
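The trailing 8a2fd1d0ecd3 is the local image ID of yandex/clickhouse-server; it will differ on other machines, so look yours up first:

docker images yandex/clickhouse-server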
5.1 Execute the scripts
C:\Users\admin>docker run -d -p 8125:8123 -p 9001:9000 -p 9019:9009 --name=ck_node1 --privileged --restart always --network zoonet --ip 172.18.0.5 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node1/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node1/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node1/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3
152cb698415f6a146d1490c93aa6e923ce9df50a01ddb053861ae037fbc0f5b6

C:\Users\admin>docker run -d -p 8126:8123 -p 9002:9000 -p 9029:9009 --name=ck_node2 --privileged --restart always --network zoonet --ip 172.18.0.6 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node2/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node2/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node2/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3
545f605f77a8bd9f1c814b7992f51ea2ac774a2ea58d2f37f63d87b7cff67931

C:\Users\admin>docker run -d -p 8127:8123 -p 9003:9000 -p 9039:9009 --name=ck_node3 --privileged --restart always --network zoonet --ip 172.18.0.7 --ulimit nofile=262144:262144 --volume F:/clickhouse/ck-node3/data:/var/lib/clickhouse:rw --volume F:/clickhouse/ck-node3/conf:/etc/clickhouse-server:rw --volume F:/clickhouse/ck-node3/log:/var/log/clickhouse-server:rw 8a2fd1d0ecd3
565c1596829436e09d1b4a379080ab924f18143a05f29057ebc49a4d2bb5ffe0
5.2 View the ClickHouse status
C:\Users\admin>docker ps
CONTAINER ID   IMAGE                      COMMAND            CREATED          STATUS          PORTS                                                                                                                             NAMES
565c15968294   8a2fd1d0ecd3               "/entrypoint.sh"   9 seconds ago    Up 6 seconds    0.0.0.0:8127->8123/tcp, :::8127->8123/tcp, 0.0.0.0:9003->9000/tcp, :::9003->9000/tcp, 0.0.0.0:9039->9009/tcp, :::9039->9009/tcp   ck_node3
545f605f77a8   8a2fd1d0ecd3               "/entrypoint.sh"   16 seconds ago   Up 14 seconds   0.0.0.0:8126->8123/tcp, :::8126->8123/tcp, 0.0.0.0:9002->9000/tcp, :::9002->9000/tcp, 0.0.0.0:9029->9009/tcp, :::9029->9009/tcp   ck_node2
152cb698415f   8a2fd1d0ecd3               "/entrypoint.sh"   24 seconds ago   Up 21 seconds   0.0.0.0:8125->8123/tcp, :::8125->8123/tcp, 0.0.0.0:9001->9000/tcp, :::9001->9000/tcp, 0.0.0.0:9019->9009/tcp, :::9019->9009/tcp   ck_node1
a2a3e46643ac   yandex/clickhouse-server   "/entrypoint.sh"   2 hours ago      Up 2 hours      0.0.0.0:8123->81
5.3 Parameter interpretation
docker run
# Run in the background
-d
# Port mapping
-p 8125:8123 -p 9001:9000 -p 9019:9009
# Container name
--name=ck_node1
# Prevent permission-related errors
--privileged
# When Docker restarts, the container restarts automatically as well
--restart always
# Manually assign each node an IP from the custom subnet
--network zoonet --ip 172.18.0.5
--ulimit nofile=262144:262144
# Data directory mapping
--volume F:/clickhouse/ck-node1/data:/var/lib/clickhouse:rw
# Configuration mapping
--volume F:/clickhouse/ck-node1/conf:/etc/clickhouse-server:rw
# Log mapping
--volume F:/clickhouse/ck-node1/log:/var/log/clickhouse-server:rw
# Image id
8a2fd1d0ecd3
6. Client connection
The client can now connect to all three ClickHouse nodes
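The cluster definition can also be confirmed from any node by querying the system.clusters table (an optional check; the WHERE clause filters to the cluster configured above):

SELECT cluster, shard_num, replica_num, host_address, port
FROM system.clusters
WHERE cluster = 'jdcluster';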
7. Create table
7.1 Create a database named jdcluster on each connection (ports 8125, 8126 and 8127)
create database jdcluster;
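Optionally confirm on each connection that the database exists:

SHOW DATABASES;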
7.2 create the jdcluster.test001 table on the connection with port number 8125
create table jdcluster.test001 on cluster jdcluster
(
    membership_id int,     -- comment 'member id'
    membership_uid String, -- comment 'member uid'
    insert_date Date DEFAULT toDate(now()) -- date the data was inserted
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test001', '{replica}')
order by membership_uid
Parameter interpretation:
# /clickhouse/tables/ is the root directory for ClickHouse tables in ZooKeeper
# {shard} is filled in from the macros in the node's configuration file
# test001 is the table's location in ZooKeeper, generally kept the same as the table name
# {replica} is likewise the replica value from the configuration file
# For example, with the macros below, the path expands to /clickhouse/tables/03/test001
# and the replica name to jdcluster-03-1
<macros>
    <cluster>jdcluster</cluster>
    <shard>03</shard>
    <replica>jdcluster-03-1</replica>
</macros>
ReplicatedMergeTree('/clickhouse/tables/{shard}/test001', '{replica}')
After the creation succeeds, jdcluster.test001 is also visible on 8126 and 8127
7.3 Create a distributed table (execute once on any node)
CREATE TABLE jdcluster.test001_all ON CLUSTER jdcluster
AS jdcluster.test001
-- Distributed(cluster name, target database, target local table, sharding key)
ENGINE = Distributed('jdcluster', 'jdcluster', 'test001', rand())
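Since the statement runs ON CLUSTER, the distributed table is created on every node; an optional check from any connection:

SELECT database, name, engine
FROM system.tables
WHERE database = 'jdcluster';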
7.4 Insert data through the distributed table
Insert data
INSERT INTO jdcluster.test001_all (membership_id, membership_uid, insert_date)
VALUES (0, 'a', toDate(now())),
       (0, 'a', toDate(now())),
       (2, 'a', toDate(now())),
       (3, 'a', toDate(now())),
       (4, 'a', toDate(now())),
       (5, 'd', toDate(now()));
View data
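For example, a minimal pair of queries (run the local-table query on each node to see that node's share of the rows):

-- All rows, queried through the distributed table
SELECT * FROM jdcluster.test001_all;

-- Each node's local share of the rows
SELECT count() FROM jdcluster.test001;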
You can see that the data has been randomly distributed across the three shards; sharding in the ClickHouse cluster is complete.
Replication (replicas) has not been demonstrated yet.
https://www.cnblogs.com/qa-freeroad/p/14398818.html