Step by step to complete the binary deployment of Kubernetes (II) -- flannel network configuration (single node)

Preface

The previous article demonstrated how to set up the etcd cluster for a single-node Kubernetes binary deployment. This article continues from there and completes the flannel network configuration that lets containers on different nodes of the cluster communicate with each other.

Environment preparation

First, install Docker CE on both node nodes. You can refer to my earlier article on Docker deployment: Unveiling docker: basic theory combing and installation process demonstration. I installed it directly with a shell script. Note that for image acceleration it is best to use the accelerator address you applied for on Alibaba Cloud or a similar service.
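For reference, here is a minimal install sketch assuming CentOS 7; it is not the exact script from my earlier article, and the registry mirror URL is a placeholder that you should replace with your own accelerator address.

# Minimal Docker CE install sketch for CentOS 7 (adjust to your environment)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

# Image acceleration: the mirror URL below is a placeholder, replace it with your own
cat <<EOF >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}
EOF

systemctl enable docker
systemctl daemon-reload
systemctl restart docker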

Last time I suspended the virtual machines in the experimental environment, so before continuing it is a good idea to check that each node can still reach the external network, and then check the health of the three-node etcd cluster. The checks below use node01 as the example.

[root@node01 opt]# ping www.baidu.com
#Test and verify whether the docker service is enabled on the two node nodes
[root@node01 opt]# systemctl status docker.service
#Health check of the etcd cluster
[root@node01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" cluster-health
member a25c294d3a391c7c is healthy: got healthy result from https://192.168.0.128:2379
member b2db359ffad36ee5 is healthy: got healthy result from https://192.168.0.129:2379
member eddae83baed564ba is healthy: got healthy result from https://192.168.0.130:2379
cluster is healthy

The output ends with cluster is healthy, which confirms that the etcd cluster is currently healthy.

Configure the flannel network

On the master node, write the subnet range to be allocated into etcd for flannel to use:

#Write operation
[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
#Execution result display
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
#View the configuration that was written
[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" get /coreos.com/network/config
#Execution result display
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
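The Backend type here is vxlan, which encapsulates container traffic so it works even when nodes sit on different layer-2 networks. As a side note that is not part of this deployment, if all nodes share the same layer-2 segment you could instead register flannel's host-gw backend, which routes without encapsulation; an illustrative sketch only:

# Alternative (illustrative only): host-gw backend, usable when all nodes share a layer-2 network
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "host-gw"}}'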

To deploy flannel on the nodes you first need the flannel package. The configuration is the same on both nodes; node01 is used as the example here:
Package resources:
Link: https://pan.baidu.com/s/1etCPIGRQ1ZUxcNaCxChaCQ
Extraction code: 65ml
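If you would rather not use the network drive, the same package can also be pulled from the flannel GitHub releases page, assuming the node has outbound Internet access:

# Alternative download path (assumes Internet access from the node)
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz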

[root@node01 ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  Template  picture  download  desktop  flannel-v0.10.0-linux-amd64.tar.gz
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
#The above are the files extracted from the package

Create the Kubernetes working directory (with its cfg, bin and ssl subdirectories) on both nodes and move the two binaries into the bin directory:

[root@node01 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Next, flanneld needs a configuration file and a systemd unit file. Both can be generated with a shell script:

vim flannel.sh

#!/bin/bash
# Take the etcd endpoint list from the first argument
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

Execute the script, passing the etcd endpoint list as its first argument:

[root@node01 ~]# bash flannel.sh https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
#The result is as follows:
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
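At this point you can optionally confirm that flanneld is running and has registered a subnet lease in etcd. A sketch, run from the etcd certificate directory; the exact lease keys listed will depend on the subnets your nodes were assigned:

# Optional verification (output varies by environment)
systemctl status flanneld
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" \
  ls /coreos.com/network/subnets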

Next, configure Docker to use the flannel network:

#Edit the docker service startup file
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
#In the [Service] section, add the environment file
EnvironmentFile=/run/flannel/subnet.env
#Add the $DOCKER_NETWORK_OPTIONS parameter to the ExecStart line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
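Editing the unit file under /usr/lib/systemd/system works, but a package update can overwrite it. An equivalent approach, if you prefer, is a systemd drop-in; the file name flannel.conf below is just an example:

# Hypothetical drop-in; systemd merges it with the stock docker.service
mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' >/etc/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=/run/flannel/subnet.env
# An empty ExecStart= clears the one from the stock unit before redefining it
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
EOF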

Take a look at the subnet.env file

[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.56.1/24 --ip-masq=false --mtu=1450"
#--bip is the subnet used at startup

Restart docker service

[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker

View the flannel network

[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:fb:e2:37:f9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.129  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe1d:9287  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:1d:92:87  txqueuelen 1000  (Ethernet)
        RX packets 1068818  bytes 1195325321 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 461088  bytes 43526519 (41.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#Check whether the flannel network segment is consistent with subnet.env above
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::74a5:98ff:fe3f:4bf7  prefixlen 64  scopeid 0x20<link>
        ether 76:a5:98:3f:4b:f7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0
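To see how traffic to the other node's subnet will be forwarded, you can also check the kernel routing table and the VXLAN device flannel created; the exact routes shown depend on which subnets flannel handed out in your environment:

# Inspect the routes flannel installed and the VXLAN device it created
ip route | grep 172.17
ip -d link show flannel.1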

The flannel subnet on my node02 node is 172.17.91.0/24. From node01, test by pinging the gateway of node02's segment:

[root@node01 ~]# ping 172.17.91.1
PING 172.17.91.1 (172.17.91.1) 56(84) bytes of data.
64 bytes from 172.17.91.1: icmp_seq=1 ttl=64 time=0.436 ms
64 bytes from 172.17.91.1: icmp_seq=2 ttl=64 time=0.343 ms
64 bytes from 172.17.91.1: icmp_seq=3 ttl=64 time=1.19 ms
64 bytes from 172.17.91.1: icmp_seq=4 ttl=64 time=0.439 ms
^C

Being able to ping across the two subnets proves that flannel is routing traffic between the nodes.
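If you want to see the encapsulation itself, you can capture on the physical interface while repeating the ping above; flannel's vxlan backend uses UDP port 8472 by default, and ens33 is the interface name taken from the ifconfig output shown earlier:

# Run on node01 in a second terminal while pinging 172.17.91.1
tcpdump -i ens33 -nn udp port 8472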

Now start a container on each of the two nodes and test whether network communication between the two containers works.

[root@node01 ~]# docker run -it centos:7 /bin/bash
#Drops straight into the container
[root@8bf87d48390f /]# yum install -y net-tools
[root@8bf87d48390f /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 9511  bytes 7631125 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4561  bytes 249617 (243.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#Address of the second container (on node02)
[root@234aac7fad6c /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.91.2  netmask 255.255.255.0  broadcast 172.17.91.255
        ether 02:42:ac:11:5b:02  txqueuelen 0  (Ethernet)
        RX packets 9456  bytes 7629047 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4802  bytes 262568 (256.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
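As an aside, if you prefer not to install net-tools inside each container, the same address can be read from the host with docker inspect; the container ID below is the one from the example above, replace it with your own:

# Read the container's bridge IP from the host
docker inspect -f '{{.NetworkSettings.IPAddress}}' 8bf87d48390f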

Test whether two containers can ping each other

[root@8bf87d48390f /]# ping 172.17.91.2
PING 172.17.91.2 (172.17.91.2) 56(84) bytes of data.
64 bytes from 172.17.91.2: icmp_seq=1 ttl=62 time=0.555 ms
64 bytes from 172.17.91.2: icmp_seq=2 ttl=62 time=0.361 ms
64 bytes from 172.17.91.2: icmp_seq=3 ttl=62 time=0.435 ms

The ping succeeds, which shows that containers on the two nodes can now communicate with each other.
