Attachment 018.K3S-ETCD high availability deployment

I. overview of K3S

1.1 introduction to K3S

K3S is a lightweight Kubernetes distribution. It is easy to install, has low memory consumption, and all of its binaries are less than 40 MB.
It is well suited for:
  • Edge computing
  • IoT (Internet of Things)
  • CI environments
  • ARM devices

1.2 characteristics of K3S

k3s is a fully compliant Kubernetes distribution with the following changes:
  • Legacy, alpha, and non-default features that are no longer used in most Kubernetes clusters have been removed.
  • Most built-in plug-ins (such as cloud provider and storage plug-ins) have been removed; they can be replaced with external plug-ins.
  • SQLite3 is added as the default data store. etcd3 is still available, but not the default.
  • Everything is wrapped in a simple launcher that handles complex TLS and other options.
  • Operating system dependencies are minimal (only a sane kernel and cgroup mounts are required). The k3s package includes the following dependencies:
    • containerd
    • Flannel
    • CoreDNS
    • CNI
    • Host system services (iptables, socat, etc)

1.3 K3S architecture

The server node is defined as the host (bare-metal or virtual machine) running the k3s server command. The worker node is defined as the host running the k3s agent command.
The common K3S high availability architecture is as follows:
  • Two or more server nodes;
  • An external data store.

1.4 worker node registration

A worker node registers through a websocket connection initiated when the k3s agent starts.
The worker node registers with the server using the cluster secret together with a randomly generated node password stored in /etc/rancher/node/password. The server stores the password of each node under /var/lib/rancher/k3s/server/cred/node-passwd, and any subsequent registration attempt must use the same password. If the /etc/rancher/node directory of a worker node is deleted, the password file should be recreated for that worker, or the node should be removed from the server.
By starting the K3s server or agent with the --with-node-id flag, a unique node ID is appended to the hostname.
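A minimal sketch for inspecting or resetting the registration password, assuming the paths described above (node names are examples):
[root@master01 ~]# cat /var/lib/rancher/k3s/server/cred/node-passwd    # Per-node passwords recorded by the server
[root@worker01 ~]# cat /etc/rancher/node/password                      # Password stored locally on the worker
# If /etc/rancher/node was deleted on a worker, either recreate the password file
# with the value recorded on the server, or remove the stale node entry so the
# worker can re-register:
[root@master01 ~]# kubectl delete node worker01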

II. K3S deployment plan

2.1 node requirements

Every node must have a unique hostname.
If nodes share the same hostname, change it before running K3s, or pass a unique name via the --node-name flag or the $K3S_NODE_NAME environment variable.
Minimum configuration under no load: 512 MB RAM, 1 CPU core.
The k3s server needs port 6443/TCP to be reachable by all nodes, and the nodes need to reach each other on port 8472/UDP to build the Flannel VXLAN network.
If you do not use Flannel VXLAN and provide your own CNI, k3s does not need port 8472/UDP to be opened. K3s uses a reverse tunnel so that workers establish outbound connections to the server, and all kubelet traffic flows through that tunnel.
If you want to use the metrics server, port 10250/TCP must be opened on every node.
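If firewalld is kept enabled rather than being disabled as in section 3.2, the required ports could be opened with a sketch like the following, run on every node (the metrics-server port is optional):
[root@master01 ~]# firewall-cmd --permanent --add-port=6443/tcp     # k3s API server
[root@master01 ~]# firewall-cmd --permanent --add-port=8472/udp     # Flannel VXLAN
[root@master01 ~]# firewall-cmd --permanent --add-port=10250/tcp    # kubelet / metrics server
[root@master01 ~]# firewall-cmd --reload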

2.2 node planning

High availability architecture 1: etcd co-located with the master node components.

Node hostname    IP              Type               Running services
master01         172.24.12.11    k3s master node    containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, flannel
master02         172.24.12.12    k3s master node    containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, flannel
master03         172.24.12.13    k3s master node    containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, flannel
worker01         172.24.12.21    k3s worker node    containerd, kubelet, proxy, flannel
worker02         172.24.12.22    k3s worker node    containerd, kubelet, proxy, flannel
worker03         172.24.12.23    k3s worker node    containerd, kubelet, proxy, flannel

III. K3S deployment preparation

3.1 variable parameter preparation

[root@master01 ~]# vi environment.sh
  1 #!/bin/sh
  2 #****************************************************************#
  3 # ScriptName: environment.sh
  4 # Author: xhy
  5 # Create Date: 2020-05-13 12:21
  6 # Modify Author: xhy
  7 # Modify Date: 2020-05-13 12:21
  8 # Version: 
  9 #***************************************************************#
 10 
 11 # Cluster MASTER machine IP array
 12 export MASTER_IPS=(172.24.12.11 172.24.12.12 172.24.12.13)
 13 
 14 # Host name array corresponding to cluster MASTER IP
 15 export MASTER_NAMES=(master01 master02 master03)
 16 
 17 # Cluster NODE machine IP array
 18 export NODE_IPS=(172.24.12.21 172.24.12.22 172.24.12.23)
 19 
 20 # Host name array corresponding to cluster NODE IP
 21 export NODE_NAMES=(worker01 worker02 worker03)
 22 
 23 # Cluster all machine IP array
 24 export ALL_IPS=(172.24.12.11 172.24.12.12 172.24.12.13 172.24.12.21 172.24.12.22 172.24.12.23)
 25 
 26 # Array of host names corresponding to all IP addresses in the cluster
 27 export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
 28 
 29 # etcd cluster service address list
 30 export ETCD_ENDPOINTS="https://172.24.12.11:2379,https://172.24.12.12:2379,https://172.24.12.13:2379"
 31 
 32 # The IP and port of communication between etcd clusters
 33 export ETCD_NODES="master01=https://172.24.12.11:2380,master02=https://172.24.12.12:2380,master03=https://172.24.12.13:2380"
 34 
 35 # Name of interconnection network interface between nodes
 36 export IFACE="eth0"
 37 
 38 # etcd data directory
 39 export ETCD_DATA_DIR="/data/k3s/etcd/data"
 40 
 41 # etcd WAL directory; an SSD partition or a partition different from ETCD_DATA_DIR is recommended
 42 export ETCD_WAL_DIR="/data/k3s/etcd/wal"

3.2 relevant optimization

[root@master01 ~]# vi k3sinit.sh
  1 #!/bin/sh
  2 #****************************************************************#
  3 # ScriptName: k3sinit.sh
  4 # Author: xhy
  5 # Create Date: 2020-05-13 18:56
  6 # Modify Author: xhy
  7 # Modify Date: 2020-05-13 18:56
  8 # Version: 
  9 #***************************************************************#
 10 # Initialize the machine. This needs to be executed on every machine.
 11 
 12 # Disable the SELinux.
 13 sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
 14 
 15 # Turn off and disable the firewalld.
 16 systemctl stop firewalld
 17 systemctl disable firewalld
 18 
 19 # Modify related kernel parameters & Disable the swap.
 20 cat > /etc/sysctl.d/k3s.conf << EOF
 21 net.ipv4.ip_forward = 1
 22 net.bridge.bridge-nf-call-ip6tables = 1
 23 net.bridge.bridge-nf-call-iptables = 1
 24 net.ipv4.tcp_tw_recycle = 0
 25 vm.swappiness = 0
 26 vm.overcommit_memory = 1
 27 vm.panic_on_oom = 0
 28 net.ipv6.conf.all.disable_ipv6 = 1
 29 EOF
 30 sysctl -p /etc/sysctl.d/k3s.conf >&/dev/null
 31 swapoff -a
 32 sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
 33 modprobe br_netfilter
 34 
 35 # Add ipvs modules
 36 cat > /etc/sysconfig/modules/ipvs.modules <<EOF
 37 #!/bin/bash
 38 modprobe -- ip_vs
 39 modprobe -- ip_vs_rr
 40 modprobe -- ip_vs_wrr
 41 modprobe -- ip_vs_sh
 42 modprobe -- nf_conntrack_ipv4
 43 EOF
 44 chmod 755 /etc/sysconfig/modules/ipvs.modules
 45 bash /etc/sysconfig/modules/ipvs.modules
 46 
 47 # Install rpm
 48 yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
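After k3sinit.sh has run on a node, the result can be spot-checked with a short sketch such as the following (standard CentOS commands assumed):
[root@master01 ~]# getenforce                              # Disabled after a reboot; the script only edits the config file
[root@master01 ~]# systemctl is-enabled firewalld          # Expect disabled
[root@master01 ~]# sysctl net.ipv4.ip_forward              # Expect net.ipv4.ip_forward = 1
[root@master01 ~]# lsmod | grep -E 'ip_vs|br_netfilter'    # Confirm the ipvs and bridge netfilter modules are loaded
[root@master01 ~]# free -m | grep -i swap                  # Swap total should be 0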

3.3 configure passwordless ssh

  1 [root@master01 ~]# cat > /etc/hosts << EOF
  2 172.24.12.11 master01
  3 172.24.12.12 master02
  4 172.24.12.13 master03
  5 172.24.12.21 worker01
  6 172.24.12.22 worker02
  7 172.24.12.23 worker03
  8 EOF
In order to facilitate remote distribution of files and command execution, this experiment configures the ssh trust relationship between the master node and other nodes.
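The loop below assumes an RSA key pair already exists on master01; if it does not, one can be generated first, for example:
[root@master01 ~]# ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa    # Create the key pair non-interactively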
  1 [root@master01 ~]# source /root/environment.sh
  2 [root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  3 do
  4 echo ">>> ${all_ip}"
  5 ssh-copy-id -i ~/.ssh/id_rsa.pub root@${all_ip}
  6 scp /etc/hosts root@${all_ip}:/etc/hosts
  7 scp environment.sh root@${all_ip}:/root/
  8 scp k3sinit.sh root@${all_ip}:/root/
  9 ssh root@${all_ip} "chmod +x /root/environment.sh"
 10 ssh root@${all_ip} "chmod +x /root/k3sinit.sh"
 11 ssh root@${all_ip} "bash /root/k3sinit.sh &"
 12 done
Tip: this operation only needs to be performed on the master01 node.

IV. custom certificate

4.1 install cfssl

  1 [root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl		#Download the cfssl binary
  2 [root@master01 ~]# chmod u+x /usr/local/bin/cfssl
  3 [root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson		#Download cfssljson
  4 [root@master01 ~]# chmod u+x /usr/local/bin/cfssljson
  5 [root@master01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
  6 [root@master01 ~]# chmod u+x /usr/local/bin/cfssl-certinfo
  7 [root@master01 ~]# mkdir /opt/k3s/work
  8 [root@master01 ~]# cd /opt/k3s/work
  9 [root@master01 work]# cfssl print-defaults config > config.json
 10 [root@master01 work]# cfssl print-defaults csr > csr.json	#Create template configuration json files

4.2 create root certificate

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# cp config.json ca-config.json		#Copy a profile as a CA
  3 [root@master01 work]# cat > ca-config.json <<EOF
  4 {
  5     "signing": {
  6         "default": {
  7             "expiry": "168h"
  8         },
  9         "profiles": {
 10             "kubernetes": {
 11                 "expiry": "87600h",
 12                 "usages": [
 13                     "signing",
 14                     "key encipherment",
 15                     "server auth",
 16                     "client auth"
 17                 ]
 18             }
 19         }
 20     }
 21 }
 22 EOF
Field explanation:
config.json: multiple profiles can be defined to specify different expiration times, usage scenarios and other parameters; a profile will be used later when signing the certificate;
  • signing: indicates that the certificate can be used to sign other certificates; CA=TRUE in the generated ca.pem certificate;
  • server auth: indicates that the client can use the CA to verify the certificate provided by the server;
  • client auth: indicates that the server can use the CA to verify the certificate provided by the client.
  1 [root@master01 work]# cp csr.json ca-csr.json	#Copy a certificate signing request file as CA
  2 [root@master01 work]# cat > ca-csr.json <<EOF
  3 {
  4     "CN": "kubernetes",
  5     "key": {
  6         "algo": "rsa",
  7         "size": 2048
  8     },
  9     "names": [
 10         {
 11             "C": "CN",
 12             "ST": "Shanghai",
 13             "L": "Shanghai",
 14             "O": "k3s",
 15             "OU": "System"
 16         }
 17     ]
 18 }
 19 EOF
Field explanation:
  • CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting user name; browsers use this field to verify whether a website is legitimate;
  • C: country;
  • ST: state/province;
  • L: city;
  • O: Organization. kube-apiserver extracts this field from the certificate as the group of the requesting user;
  • OU: organization unit.
[root@master01 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca	#Generate the CA key (ca-key.pem) and certificate (ca.pem)
Tip: after the certificate is generated, the Kubernetes cluster needs mutual TLS authentication, so ca-key.pem and ca.pem can be copied to the /etc/kubernetes/ssl directory of all machines to be deployed.
Refer to appendix 008.Kubernetes TLS certificate introduction and creation for more TLS Certificate creation methods.
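As a quick sanity check, the generated CA certificate can be inspected with the cfssl-certinfo tool installed in 4.1 (a sketch; openssl works equally well):
[root@master01 work]# cfssl-certinfo -cert ca.pem                       # Dump the CA certificate fields as JSON
[root@master01 work]# openssl x509 -in ca.pem -noout -subject -dates    # Confirm subject and validity period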

4.3 distribute the root certificate

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# for all_ip in ${ALL_IPS[@]}
  4   do
  5     echo ">>> ${all_ip}"
  6     ssh root@${all_ip} "mkdir -p /etc/kubernetes/cert"
  7     scp ca*.pem ca-config.json root@${all_ip}:/etc/kubernetes/cert
  8   done

V. installation of ETCD

5.1 installing ETCD

  1 [root@master01 ~]# wget https://github.com/coreos/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
  2 [root@master01 ~]# tar -xvf etcd-v3.4.7-linux-amd64.tar.gz

5.2 distribution of ETCD

  1 [root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  2   do
  3     echo ">>> ${master_ip}"
  4     scp etcd-v3.4.7-linux-amd64/etcd* root@${master_ip}:/usr/local/bin
  5     ssh root@${master_ip} "chmod +x /usr/local/bin/*"
  6     ssh root@${master_ip} "mkdir -p /data/k3s/etcd/data"
  7     ssh root@${master_ip} "mkdir -p /data/k3s/etcd/wal"
  8   done

5.3 create etcd certificate and key

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 cert]# cat > etcd-csr.json <<EOF
  3 {
  4     "CN": "etcd",
  5     "hosts": [
  6         "127.0.0.1",
  7         "localhost",
  8         "172.24.12.11",
  9         "172.24.12.12",
 10         "172.24.12.13"
 11     ],
 12     "key": {
 13         "algo": "rsa",
 14         "size": 2048
 15     },
 16     "names": [
 17         {
 18             "C": "CN",
 19             "ST": "Shanghai",
 20             "L": "Shanghai",
 21             "O": "k3s",
 22             "OU": "System"
 23         }
 24     ]
 25 }
 26 EOF
 27 #Create the certificate signing request file for etcd
Explanation:
The hosts field specifies the list of etcd node IPs or domain names authorized to use the certificate; the IPs of all three etcd cluster nodes need to be listed here.
  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# cfssl gencert -ca=/opt/k3s/work/ca.pem \
  3 -ca-key=/opt/k3s/work/ca-key.pem -config=/opt/k3s/work/ca-config.json \
  4 -profile=kubernetes etcd-csr.json | cfssljson -bare etcd	#Generate the etcd key (etcd-key.pem) and certificate (etcd.pem)

5.4 distributing certificates and private keys

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     ssh root@${master_ip} "mkdir -p /etc/etcd/cert"
  7     scp etcd*.pem root@${master_ip}:/etc/etcd/cert/
  8   done

5.5 create etcd's systemd

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# cat > etcd.service.template <<EOF
  4 [Unit]
  5 Description=Etcd Server
  6 After=network.target
  7 After=network-online.target
  8 Wants=network-online.target
  9 Documentation=https://github.com/coreos
 10 
 11 [Service]
 12 Type=notify
 13 WorkingDirectory=${ETCD_DATA_DIR}
 14 ExecStart=/usr/local/bin/etcd \\
 15   --data-dir=${ETCD_DATA_DIR} \\
 16   --wal-dir=${ETCD_WAL_DIR} \\
 17   --name=##MASTER_NAME## \\
 18   --cert-file=/etc/etcd/cert/etcd.pem \\
 19   --key-file=/etc/etcd/cert/etcd-key.pem \\
 20   --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
 21   --peer-cert-file=/etc/etcd/cert/etcd.pem \\
 22   --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
 23   --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
 24   --peer-client-cert-auth \\
 25   --client-cert-auth \\
 26   --listen-peer-urls=https://##MASTER_IP##:2380 \\
 27   --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \\
 28   --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \\
 29   --advertise-client-urls=https://##MASTER_IP##:2379 \\
 30   --initial-cluster-token=etcd-cluster-0 \\
 31   --initial-cluster=${ETCD_NODES} \\
 32   --initial-cluster-state=new \\
 33   --auto-compaction-mode=periodic \\
 34   --auto-compaction-retention=1 \\
 35   --max-request-bytes=33554432 \\
 36   --quota-backend-bytes=6442450944 \\
 37   --heartbeat-interval=250 \\
 38   --election-timeout=2000
 39 Restart=on-failure
 40 RestartSec=5
 41 LimitNOFILE=65536
 42 
 43 [Install]
 44 WantedBy=multi-user.target
 45 EOF
Explanation:
WorkingDirectory, --data-dir: specify the working directory and data directory as ${ETCD_DATA_DIR}; the directory must be created before starting the service;
--wal-dir: specifies the WAL directory; to improve performance, an SSD or a disk different from --data-dir is generally used;
--name: specifies the node name; when --initial-cluster-state is new, the value of --name must be in the --initial-cluster list;
--cert-file, --key-file: the certificate and private key used by the etcd server when communicating with clients;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
--peer-cert-file, --peer-key-file: the certificate and private key used by etcd when communicating with its peers;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them.

5.6 modify the node-specific addresses in the etcd systemd unit

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# for (( i=0; i < 3; i++ ))
  4   do
  5     sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
  6   done

5.7 distribute the etcd systemd unit

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01  work]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     scp etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
  7   done

5.8 start etcd

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01  work]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     ssh root@${master_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
  7     ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  8   done

5.9 check etcd startup

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     ssh root@${master_ip} "systemctl status etcd|grep Active"
  7   done

5.10 verify service status

  1 [root@master01 ~]# cd /opt/k3s/work
  2 [root@master01 work]# source /root/environment.sh
  3 [root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     ETCDCTL_API=3 /usr/local/bin/etcdctl \
  7     --endpoints=https://${master_ip}:2379 \
  8     --cacert=/etc/kubernetes/cert/ca.pem \
  9     --cert=/etc/etcd/cert/etcd.pem \
 10     --key=/etc/etcd/cert/etcd-key.pem endpoint health
 11   done

5.11 view the current etcd leader

  1 [root@master01 ~]# source /root/environment.sh
  2 [root@master01 ~]# ETCDCTL_API=3 /usr/local/bin/etcdctl \
  3   -w table --cacert=/etc/kubernetes/cert/ca.pem \
  4   --cert=/etc/etcd/cert/etcd.pem \
  5   --key=/etc/etcd/cert/etcd-key.pem \
  6   --endpoints=${ETCD_ENDPOINTS} endpoint status
As shown above, the current leader of ETCD cluster is 172.24.12.12.
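A related sketch for listing the cluster members and their IDs, using the same certificate options:
[root@master01 ~]# ETCDCTL_API=3 /usr/local/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list    # Show member IDs, names and peer/client URLs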

VI. installation of K3S server

6.1 script installation

  1 [root@master01 ~]# curl -sfL https://docs.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn \
  2 sh -s - server --write-kubeconfig ~/.kube/config \
  3 --datastore-endpoint='https://172.24.12.11:2379,https://172.24.12.12:2379,https://172.24.12.13:2379' --datastore-cafile=/etc/kubernetes/cert/ca.pem \
  4 --datastore-certfile=/etc/etcd/cert/etcd.pem \
  5 --datastore-keyfile=/etc/etcd/cert/etcd-key.pem \
  6 --token=x120952576 \
  7 --tls-san=172.24.12.254
  8 [root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Tip: the above needs to be executed on all master nodes; /var/lib/rancher/k3s/server/manifests/ is the static pod manifest path in k3s.
The --write-kubeconfig ~/.kube/config option writes the kubeconfig to the default location expected by kubectl, rather than the k3s default /etc/rancher/k3s/k3s.yaml; the latter requires extra settings for istio and helm, or they may fail to run.
Explanation: the installation script can be configured with the following environment variables:
  • INSTALL_K3S_MIRROR: users in China can set INSTALL_K3S_MIRROR=cn to speed up downloading the K3s binary, or download the binary manually.
  • INSTALL_K3S_SKIP_DOWNLOAD: if set to true, neither the K3s hash file nor the K3s binary will be downloaded.
  • INSTALL_K3S_SYMLINK: if set to skip, symbolic links will not be created; force will force overwriting. By default symbolic links are created if the command is not already in PATH.
  • INSTALL_K3S_SKIP_START: if set to true, the k3s service will not be started automatically.
  • INSTALL_K3S_VERSION: a K3s version to download from GitHub. If not specified, the latest release is downloaded.
  • INSTALL_K3S_BIN_DIR: the directory in which to install the K3s binary, symbolic links, and uninstall script; /usr/local/bin is the default.
  • INSTALL_K3S_BIN_DIR_READ_ONLY: if set to true, files will not be written to INSTALL_K3S_BIN_DIR; to force writing, set INSTALL_K3S_SKIP_DOWNLOAD=true.
  • INSTALL_K3S_SYSTEMD_DIR: the directory in which to install the systemd service and environment files; /etc/systemd/system is the default.
  • INSTALL_K3S_EXEC: when INSTALL_K3S_EXEC is not specified, or K3S_URL is set, or no server subcommand is included in INSTALL_K3S_EXEC, K3s runs as an agent by default; otherwise it runs in the server role. The final systemd command is resolved as the combination of this EXEC command and the script arguments ($@).
  • INSTALL_K3S_NAME: the name of the systemd service to create; if not specified, it is derived from the K3s exec command. If specified, the name is prefixed with k3s-.
  • INSTALL_K3S_TYPE: the systemd service type to create; if not specified, it is derived from the K3s exec command.
Tip: the default K3S will run with flannel as the CNI and VXLAN as the default backend. In this experiment, external etcd database is used as storage. For more data storage types, please refer to https://docs.rancher.cn/k3s/installation/datastore.html.
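Once the script finishes on a master node, the generated service and kubeconfig can be checked with a short sketch like this (paths are the installer defaults):
[root@master01 ~]# systemctl status k3s | grep Active       # The k3s server unit created by the installer
[root@master01 ~]# cat /etc/systemd/system/k3s.service      # Shows the resolved server arguments
[root@master01 ~]# kubectl get nodes -o wide                # Uses the ~/.kube/config written by --write-kubeconfig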

6.2 validation

  1 [root@master01 ~]# kubectl get nodes
  1 [root@master01 ~]# kubectl taint node master01 node-role.kubernetes.io/master="":NoSchedule
  2 [root@master01 ~]# kubectl taint node master02 node-role.kubernetes.io/master="":NoSchedule
  3 [root@master01 ~]# kubectl taint node master03 node-role.kubernetes.io/master="":NoSchedule
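The taints can then be verified per node, for example:
[root@master01 ~]# kubectl describe node master01 | grep Taints    # Expect node-role.kubernetes.io/master:NoSchedule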

VII. High availability optimization

7.1 keepalived installation

  1 [root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  2   do
  3     echo ">>> ${master_ip}"
  4     ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel"
  5     ssh root@${master_ip} "wget http://down.linuxsb.com:8888/software/keepalived-2.0.20.tar.gz"
  6     ssh root@${master_ip} "tar -zxvf keepalived-2.0.20.tar.gz"
  7     ssh root@${master_ip} "cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
  8     ssh root@${master_ip} "systemctl enable keepalived && systemctl start keepalived"
  9   done
Tip: the above only needs to be run on the master01 node; it installs keepalived on all master nodes automatically over ssh.

7.2 creating a profile

  1 [root@master01 ~]# wget http://down.linuxsb.com:8888/k3s_ha.sh		#Download the high availability auto configuration script
  2 [root@master01 ~]# vi k3s_ha.sh 				#Other parts remain default
  3 # master keepalived virtual ip address
  4 export K3SHA_VIP=172.24.12.254
  5 
  6 # master01 ip address
  7 export K3SHA_IP1=172.24.12.11
  8 
  9 # master02 ip address
 10 export K3SHA_IP2=172.24.12.12
 11 
 12 # master03 ip address
 13 export K3SHA_IP3=172.24.12.13
 14 
 15 # master01 hostname
 16 export K3SHA_HOST1=master01
 17 
 18 # master02 hostname
 19 export K3SHA_HOST2=master02
 20 
 21 # master03 hostname
 22 export K3SHA_HOST3=master03
 23 
 24 # master01 network interface name
 25 export K3SHA_NETINF1=eth0
 26 
 27 # master02 network interface name
 28 export K3SHA_NETINF2=eth0
 29 
 30 # master03 network interface name
 31 export K3SHA_NETINF3=eth0
 32 
 33 [root@master01 ~]# bash k3s_ha.sh
Explanation: the above only needs to be run on the master01 node. After executing the k3s_ha.sh script, the following configuration files are generated automatically:
  • keepalived: the keepalived configuration files, located in the /etc/keepalived directory of each master node
  • nginx-lb: the nginx-lb load balancing configuration files, located in the /root/nginx-lb directory of each master node

7.3 start Keepalived

  1 [root@master01 ~]# cat /etc/keepalived/keepalived.conf
  2 [root@master01 ~]# cat /etc/keepalived/check_apiserver.sh	#Confirm the keepalived configuration
  3 [root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  4   do
  5     echo ">>> ${master_ip}"
  6     ssh root@${master_ip} "systemctl restart keepalived.service"
  7     ssh root@${master_ip} "systemctl status keepalived.service"
  8     ssh root@${master_ip} "ping -c1 172.24.12.254"
  9   done
Tip: the above only needs to be run on the master01 node; it restarts the keepalived service on all master nodes.

7.4 validation

  1 [root@master01 ~]# kubectl -n kube-system get pods | grep -E 'NAME|nginx'
  2 NAME                                      READY   STATUS      RESTARTS   AGE
  3 nginx-lb-2dk6z                            1/1     Running     0          2m56s
  4 nginx-lb-68s47                            1/1     Running     0          2m56s
  5 nginx-lb-nbc9l                            1/1     Running     0          2m56s
Tip: the above check only needs to be run on the master01 node; the nginx-lb pods run automatically on all master nodes.

7.5 enable high availability

  1 [root@master01 ~]# vi /etc/rancher/k3s/k3s.yaml 
  2 ......
  3     server: https://172.24.12.254:16443
  4 ......
Tip: it is recommended to modify all master nodes as above.
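A quick sketch to confirm which master currently holds the VIP and that the API is reachable through it:
[root@master01 ~]# ip addr show eth0 | grep 172.24.12.254                      # Shown only on the master holding the VIP
[root@master01 ~]# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes    # Now goes through https://172.24.12.254:16443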

VIII. worker node joining

8.1 worker nodes join the cluster

  1 [root@worker01 ~]# curl -sfL https://docs.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn \
  2 sh -s - agent --server https://172.24.12.254:16443 --token x120952576
  3 [root@worker01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Tip: all worker nodes join the cluster as above.
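After all workers have joined, the result can be verified from master01; the label command is optional and only affects the ROLES column (a sketch):
[root@master01 ~]# kubectl get nodes -o wide                                       # worker01-03 should appear in Ready state
[root@master01 ~]# kubectl label node worker01 node-role.kubernetes.io/worker=     # Optional: show "worker" in the ROLES column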

8.2 other command parameters

  1 [root@master01 ~]# k3s server --help		#View more k3s server parameters
  2 [root@worker01 ~]# k3s agent --help		#View more parameters of k3s agent

IX. deployment of Longhorn

9.1 overview of Longhorn

Longhorn is an open source distributed block storage system for Kubernetes.
Tip: for more information, please refer to https://github.com/longhorn/longhorn.

9.2 Longhorn deployment

  1 [root@master01 ~]# yum -y install iscsi-initiator-utils
Tip: it is recommended to install this package on all nodes.
  1 [root@master01 ~]# wget \
  2 https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
  3 [root@master01 ~]# vi longhorn.yaml
  1 #......
  2 ---
  3 kind: Service
  4 apiVersion: v1
  5 metadata:
  6   labels:
  7     app: longhorn-ui
  8   name: longhorn-frontend
  9   namespace: longhorn-system
 10 spec:
 11   type: NodePort			#Change to nodeport
 12   selector:
 13     app: longhorn-ui
 14   ports:
 15   - port: 80
 16     targetPort: 8000
 17     nodePort: 8888
 18 ---
 19 #......
Tip: it is recommended to pull the related images in advance; a sketch follows the image list.
longhornio/longhorn-engine:v0.8.1
longhornio/longhorn-ui:v0.8.1
longhornio/longhorn-instance-manager:v1_20200301
quay.io/k8scsi/csi-resizer:v0.3.0
quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
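The manifest then needs to be applied before the StorageClass in 9.3 appears. A sketch, pre-pulling images with k3s' embedded containerd and then deploying (image names as listed above; adjust the tags if the manifest pins different versions):
[root@master01 ~]# k3s ctr images pull docker.io/longhornio/longhorn-engine:v0.8.1    # Repeat for the other images, on every node
[root@master01 ~]# kubectl apply -f longhorn.yaml                                     # Deploy Longhorn into the longhorn-system namespace
[root@master01 ~]# kubectl -n longhorn-system get pods                                # Wait until all pods are Running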

9.3 dynamic sc creation

Tip: the default Longhorn deployment already creates a StorageClass; alternatively, you can manually write YAML to create one, as shown below.
  1 [root@master01 ~]# kubectl get sc
  2 NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
  3 ......
  4 longhorn               driver.longhorn.io      Delete          Immediate              true                   15m
  1 [root@master01 ~]# vi longhornsc.yaml
  1 kind: StorageClass
  2 apiVersion: storage.k8s.io/v1
  3 metadata:
  4   name: longhornsc
  5 provisioner: rancher.io/longhorn
  6 parameters:
  7   numberOfReplicas: "3"
  8   staleReplicaTimeout: "30"
  9   fromBackup: ""
  1 [root@master01 ~]# kubectl create -f longhornsc.yaml

9.4 testing PV and PVC

  1 [root@master01 ~]# vi longhornpod.yaml
  1 apiVersion: v1
  2 kind: PersistentVolumeClaim
  3 metadata:
  4   name: longhorn-pvc
  5 spec:
  6   accessModes:
  7     - ReadWriteOnce
  8   storageClassName: longhorn
  9   resources:
 10     requests:
 11       storage: 2Gi
 12 ---
 13 apiVersion: v1
 14 kind: Pod
 15 metadata:
 16   name: longhorn-pod
 17   namespace: default
 18 spec:
 19   containers:
 20   - name: volume-test
 21     image: nginx:stable-alpine
 22     imagePullPolicy: IfNotPresent
 23     volumeMounts:
 24     - name: volv
 25       mountPath: /data
 26     ports:
 27     - containerPort: 80
 28   volumes:
 29   - name: volv
 30     persistentVolumeClaim:
 31       claimName: longhorn-pvc
 32 
  1 [root@master01 ~]# kubectl create -f longhornpod.yaml 
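A short sketch to confirm the claim binds and the volume is writable inside the pod:
[root@master01 ~]# kubectl get pvc longhorn-pvc                       # STATUS should be Bound
[root@master01 ~]# kubectl get pod longhorn-pod                       # Wait for Running
[root@master01 ~]# kubectl exec longhorn-pod -- sh -c 'echo ok > /data/test && cat /data/test'    # Write and read back on the Longhorn volume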

reference resources:
https://docs.rancher.cn/k3s/
https://docs.rancher.cn/k3s/architecture.html
