Kubernetes (K8s) cluster deployment: etcd database deployment and flannel network component installation

1, Introduction to the single-master cluster deployment

Installation packages used to build the k8s cluster (the versions I used appear in the commands below):

Set up the node servers (three nodes):

Master: 192.168.66.130/24

Software to be installed: kube-apiserver, kube-controller-manager, kube-scheduler, etcd

Node01: 192.168.66.132/24

Software to be installed: kubelet, kube-proxy, docker, flannel, etcd

Node02: 192.168.66.133/24

Software to be installed: kubelet, kube-proxy, docker, flannel, etcd

2, Environment preparation

1. Configure a static IP address on each virtual machine

vi /etc/sysconfig/network-scripts/ifcfg-ens33
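
A minimal sketch of this file for the master node; the device name ens33 comes from the path above, while GATEWAY and DNS1 (192.168.66.2, the usual VMware NAT gateway) are assumptions to adjust for your environment:

TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens33
NAME=ens33
ONBOOT=yes
IPADDR=192.168.66.130
NETMASK=255.255.255.0
GATEWAY=192.168.66.2    #assumption: adjust to your network
DNS1=192.168.66.2       #assumption: adjust to your network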

2. Prevent the virtual machine's IP address from changing after a restart

systemctl stop NetworkManager
systemctl disable NetworkManager   #Keep NetworkManager from starting at boot and rewriting the static address

service network restart   #Restart the network

ping www.baidu.com    #Verify that the network is reachable

3. Do not turn off the firewall; flush its rules and set SELinux to permissive instead.

systemctl start firewalld   #Make sure firewalld is running
iptables -F    #Flush the firewall rules
setenforce 0   #Put SELinux into permissive mode

3, Deploy the etcd cluster

Communication between etcd members is encrypted with TLS, so we first create a CA certificate and use it to issue the certificates that secure the traffic.

3.1. Install cfssl, the certificate generation tool

master node:

[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/ 

//Write the script cfssl.sh: it downloads the cfssl certificate tools from the official site directly into /usr/local/bin so the system finds them on the PATH, then makes them executable
[root@localhost k8s]# vi cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

 #Run the script and wait for the tools to download
[root@localhost k8s]# bash cfssl.sh  

[root@localhost k8s]# ls /usr/local/bin/ 
#You can see three tools for making certificates
cfssl  cfssl-certinfo  cfssljson

#cfssl: certificate generation tool
#cfssl-certinfo: view certificate information
#cfssljson: generate a certificate from the JSON output piped in
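
For example, once the certificates have been generated later in this section, you can inspect one with:

[root@localhost k8s]# cfssl-certinfo -cert ca.pem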

3.2. Make the CA certificate

[root@localhost k8s]# mkdir etcd-cert    #The directory where all certificates are stored
[root@localhost k8s]# mv etcd-cert.sh etcd-cert/    #The material for generating the certificates
[root@localhost k8s]# cd etcd-cert/

1. Create the configuration file used to generate the CA certificate

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]  
      } 
    }
  }
}
EOF

2. Create the signing request (CSR) for the CA certificate

cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

3. Generate the CA certificate from the CA signing request, producing ca-key.pem and ca.pem

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
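
cfssljson -bare ca writes the CA files into the current directory; confirm they exist:

[root@localhost etcd-cert]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem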

4. Create server-csr.json, which lists the three etcd node IPs whose communication the certificate must verify

// Change the IP addresses to those of your own nodes
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.66.130",
    "192.168.66.132",
    "192.168.66.133"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

5. Use ca-key.pem, ca.pem and the server signing request to generate the etcd certificates server-key.pem and server.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
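
Likewise, cfssljson -bare server writes the server certificate files:

[root@localhost etcd-cert]# ls server*
server.csr  server-csr.json  server-key.pem  server.pem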

3.3. Use the certificates and the etcd.sh script to build the etcd cluster

Upload etcd.sh, the script that generates the etcd configuration files, to the directory /root/k8s

[root@localhost k8s]# vim etcd.sh 
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd
# Create the configuration file for this node
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# Create a startup script template for a node
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Reload systemd, enable etcd at boot, and (re)start the service
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Upload the three downloaded software packages to the k8s directory.

First extract the etcd package into the current directory, then create the etcd cluster's working directory.

[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz    #Unpack the etcd release

[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
#The etcd and etcdctl binaries from this package will be used later

[root@localhost k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}    #Config files, binaries, certificates
[root@localhost k8s]# ls /opt/etcd/
bin  cfg  ssl

1. Move the etcd and etcdctl binaries into /opt/etcd/bin/

[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin

2. Copy the certificates to /opt/etcd/ssl/

[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/

Run etcd.sh to generate the etcd configuration file and service startup script; the script then blocks, waiting for the other nodes to join.

// Adjust the IP addresses for your environment
[root@localhost k8s]#  bash etcd.sh etcd01 192.168.66.130 etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380

//In another session window you can see that the etcd process is running
[root@localhost ~]# ps aux | grep etcd

3.4. Join the node servers to the etcd cluster (for internal communication)

1. Copy the certificates from the master node to the other nodes

[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.66.132:/opt

[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.66.133:/opt

2. Copy the master node's startup script to the other nodes

[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.66.132:/usr/lib/systemd/system

[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.66.133:/usr/lib/systemd/system

3. Modify the configuration file on node01 (the edited result is shown after the commands)

[root@localhost system]# cd /opt/etcd/cfg/
[root@localhost cfg]# ls
etcd
[root@localhost cfg]# vim etcd
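
After editing, /opt/etcd/cfg/etcd on node01 should look like this (following the template from etcd.sh: ETCD_NAME becomes etcd02, every local IP becomes 192.168.66.132, and ETCD_INITIAL_CLUSTER keeps all three members):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.66.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.66.132:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.66.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.66.132:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.66.130:2380,etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"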

4. Modify the configuration file on node02 in the same way, using etcd03 and 192.168.66.133

[root@localhost system]# cd /opt/etcd/cfg/
[root@localhost cfg]# ls
etcd
[root@localhost cfg]# vim etcd

5. Run the bash command again on the master node and wait for the other nodes to join the cluster

[root@localhost k8s]# bash etcd.sh etcd01 192.168.66.130 etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380

6. Quickly start etcd on node01 and node02 at the same time (before the master's script times out)

[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl status etcd

3.5. Check the cluster status

Execute on the master node. Note: run the health-check command from inside etcd-cert/, because the certificate paths below are relative.

[root@localhost k8s]# cd etcd-cert/
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" cluster-health
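
If all three members respond, each line reports "member ... is healthy" and the output ends with the line "cluster is healthy".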

4, Docker engine deployment

Every node server must have the Docker engine deployed. For Docker installation and deployment, please refer to my previous blog: [Docker deployment and image acceleration, network optimization]

5, Deploy the flannel network components

5.1. Establish communication between the etcd cluster and the outside

1. On the master node, write the allocated subnet range into etcd for flannel to use

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

View the information that was written:

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" get /coreos.com/network/config
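
The get command echoes back exactly the JSON that was stored:

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}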

2. Copy the flannel package to the two node servers and extract it in the home directory.

//Copy to all node servers (flannel is deployed only on the nodes)
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.66.132:/root
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.66.133:/root

//Extract on every node
[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md

3. Create the k8s working directory on both node servers

[root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Upload flannel.sh, the script that generates the configuration file and the startup file.

[root@localhost ~]# vim flannel.sh
#!/bin/bash

# etcd endpoints come from the first argument, defaulting to a local etcd
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

# Generate the flanneld options file
cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

# Create the flanneld systemd unit
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

4. Enable the flannel network function on both node servers

[root@localhost ~]# bash flannel.sh https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379

Check that the flanneld service is running:

[root@localhost ~]# systemctl status flanneld

5.2. Configure Docker to connect to the flannel network

On both node servers, modify the Docker unit file:

[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
//Make two changes in the [Service] section: add an EnvironmentFile line, and insert $DOCKER_NETWORK_OPTIONS into the ExecStart line (see the sketch below):
EnvironmentFile=/run/flannel/subnet.env
$DOCKER_NETWORK_OPTIONS
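
A sketch of the edited [Service] section; the exact ExecStart line varies with your Docker version, so keep your existing flags and only add the two items above:

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock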

View the subnet segment assigned by the flannel network

[root@localhost ~]# cat /run/flannel/subnet.env 
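
The file is generated by mk-docker-opts.sh, and the subnet differs on every node. Illustrative contents (the 172.17.42.1/24 subnet is an example; yours will differ):

DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"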

Restart the docker service

[root@localhost ~]# systemctl daemon-reload 
[root@localhost ~]# systemctl restart docker

5.3. Verify that the flannel network interconnects

1. On each of the two node servers, create and enter a centos:7 container.

[root@localhost ~]# docker run -it centos:7 /bin/bash

[root@a57795cdc6ef /]# yum install net-tools -y
#After installation, the ifconfig command is available

2. Use ifconfig in each container to find its IP address, then ping the other container's address to check that the networks interconnect, for example:
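
Assuming the container on node01 received 172.17.42.2 and the container on node02 received 172.17.26.2 (both addresses are illustrative):

#Inside the node01 container
[root@a57795cdc6ef /]# ifconfig eth0     #shows 172.17.42.2 in this example
[root@a57795cdc6ef /]# ping 172.17.26.2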

Once the containers can reach each other, the flannel network is complete!
