Kubernetes v1.18.2 binary high availability deployment

1. Environment
2. Environment initialization
3. Kubernetes deployment
Conclusion
Reference link

1. Environment

Server information

Host name     IP                Remarks
k8s-master1   192.168.0.216     master1, etcd1, node
k8s-master2   192.168.0.217     master2, etcd2, node
k8s-master3   192.168.0.218     master3, etcd3, node
slb           lb.ypvip.com.cn   Alibaba Cloud SLB public domain name

This environment runs on Alibaba Cloud. The API Server is made highly available through an Alibaba Cloud SLB. If your environment is not on the cloud, the same can be achieved with nginx + keepalived or haproxy + keepalived.
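For an off-cloud setup, a minimal sketch of the nginx layer-4 piece might look like the following, written in the same heredoc style used by the scripts later in this article. It assumes nginx (built with the stream module) runs on dedicated load-balancer nodes fronted by a keepalived VIP; the upstream IPs are the three masters from the table above, and the file path and port are only examples, not part of the original article.

$ cat <<EOF > /etc/nginx/nginx.conf
# Minimal sketch: plain TCP pass-through to the three kube-apiservers (TLS is not terminated here)
user nginx;
worker_processes auto;
events { worker_connections 1024; }

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.0.216:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.217:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.218:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 6443;                # the keepalived VIP answers on this port
        proxy_pass kube_apiserver;
    }
}
EOF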

Service version and K8S cluster description

  • The Alibaba Cloud SLB uses a TCP listener on port 6443 (layer-4 load balancing to the master kube-apiservers).
  • All Alibaba Cloud ECS hosts run CentOS 7.6.1810, with the kernel upgraded to 5.x.
  • kube-proxy runs in iptables mode (an ipvs-mode configuration is kept, commented out, in the kube-proxy config).
  • Calico uses IPIP mode.
  • The cluster uses the default domain svc.cluster.local.
  • 10.10.0.1 is the cluster IP of the kubernetes service.
  • Docker CE version 19.03.6
  • Kubernetes Version 1.18.2
  • Etcd Version v3.4.7
  • Calico Version v3.14.0
  • Coredns Version 1.6.7
  • Metrics-Server Version v0.3.6
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.0.1    <none>        443/TCP   6d23h

PS: the versions above were the latest available at the time of writing.

Service and Pod IP segment planning

Name                 IP segment     Remarks
service-cluster-ip   10.10.0.0/16   65534 available addresses
pods-ip              10.20.0.0/16   65534 available addresses
Cluster dns          10.10.0.2      Cluster service domain name resolution
k8s svc              10.10.0.1      Cluster IP of the kubernetes service

2. Environment initialization

All cluster servers need to be initialized

2.1 Stop firewalld on all machines

$ systemctl stop firewalld
$ systemctl disable firewalld

2.2 Turn off swap

$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab

2.3 Turn off SELinux

$ setenforce 0
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

2.4 Set the hostname, upgrade the kernel, and install Docker CE

Run the following init.sh shell script, which performs the following four tasks:

  • Set the server hostname
  • Install the k8s dependency packages
  • Upgrade the system kernel (upgrade the CentOS 7 kernel to avoid Docker CE compatibility problems)
  • Install Docker CE 19.03.6

Run the init.sh script on each machine, for example:

PS: the init.sh script only supports CentOS and is safe to run repeatedly.

# Run on the k8s-master1 machine; the argument to init.sh sets the k8s-master1 server hostname
$ chmod +x init.sh && ./init.sh k8s-master1

# After init.sh finishes, reboot the server
$ reboot
#!/usr/bin/env bash

function Check_linux_system(){
    linux_version=`cat /etc/redhat-release`
    if [[ ${linux_version} =~ "CentOS" ]];then
        echo -e "\033[32;32m The system is ${linux_version} \033[0m \n"
    else
        echo -e "\033[32;32m The system is not CentOS. This script only supports CentOS \033[0m \n"
        exit 1
    fi
}

function Set_hostname(){
    if [ -n "$HostName" ];then
      grep $HostName /etc/hostname && echo -e "\033[32;32m Hostname is already set, skipping the hostname step \033[0m \n" && return
      case $HostName in
      help)
        echo -e "\033[32;32m Usage: bash init.sh <hostname> \033[0m \n"
        exit 1
      ;;
      *)
        hostname $HostName
        echo "$HostName" > /etc/hostname
        echo "`ifconfig eth0 | grep inet | awk '{print $2}'` $HostName" >> /etc/hosts
      ;;
      esac
    else
      echo -e "\033[32;32m Input is blank, usage: bash init.sh <hostname> \033[0m \n"
      exit 1
    fi
}

function Install_depend_environment(){
    rpm -qa | grep nfs-utils &> /dev/null && echo -e "\033[32;32m Dependencies already installed, skipping this step \033[0m \n" && return
    yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet
    echo -e "\033[32;32m Upgrading the CentOS 7 kernel to 5.x to avoid Docker CE compatibility problems \033[0m \n"
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && \
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel repolist && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64 && \
    yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64 && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64 && \
    grub2-set-default 0
    modprobe br_netfilter
    cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
    sysctl -p /etc/sysctl.d/k8s.conf
    ls /proc/sys/net/bridge
}

function Install_docker(){
    rpm -qa | grep docker && echo -e "\033[32;32m Docker is already installed, skipping this step \033[0m \n" && return
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum makecache fast
    yum -y install docker-ce-19.03.6 docker-ce-cli-19.03.6
    systemctl enable docker.service
    systemctl start docker.service
    systemctl stop docker.service
    echo '{"registry-mirrors": ["https://4xr1qpsp.mirror.aliyuncs.com"], "log-opts": {"max-size":"500m", "max-file":"3"}}' > /etc/docker/daemon.json
    systemctl daemon-reload
    systemctl start docker
}

# Initialization order
HostName=$1
Check_linux_system && \
Set_hostname && \
Install_depend_environment && \
Install_docker

3. Kubernetes deployment

Deployment sequence

  • 1. Self-signed TLS certificates
  • 2. Deploy the Etcd cluster
  • 3. Create the metrics-server certificate
  • 4. Get the K8S binary package
  • 5. Create the Node kubeconfig files
  • 6. Configure and run the Master components
  • 7. Configure automatic kubelet certificate renewal
  • 8. Configure and run the Node components
  • 9. Install the Calico network in IPIP mode
  • 10. Deploy cluster CoreDNS
  • 11. Deploy the cluster monitoring service Metrics Server
  • 12. Deploy the Kubernetes Dashboard

3.1 Self-signed TLS certificates

Install cfssl on k8s-master1 and generate the relevant certificates.

# Create a directory to hold the SSL certificates
$ mkdir /data/ssl -p

# Download the certificate generation tools
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

# Add execute permission
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

# Move the binaries into the PATH
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
# Enter the certificate directory
$ cd /data/ssl/

# Create the certificate.sh script
$ vim certificate.sh

PS: the certificates are valid for 10 years (87600h).

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.216",
    "192.168.0.217",
    "192.168.0.218",
    "10.10.0.1",
    "lb.ypvip.com.cn",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Modify the certificate.sh script according to your own environment:

"192.168.0.216", "192.168.0.217", "192.168.0.218", "10.10.0.1", "lb.ypvip.com.cn",

Modify the script and execute

$ bash certificate.sh
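To sanity-check the generated certificates (for example, confirming the 10-year validity and the SANs on server.pem), you can inspect them with cfssl-certinfo or openssl. This check is an optional addition, not part of the original steps:

# Print the full certificate details, including validity and hosts
$ cfssl-certinfo -cert server.pem

# Or with openssl: check the validity window and the subject alternative names
$ openssl x509 -in server.pem -noout -dates
$ openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"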

3.2 Deploy the Etcd cluster

Operate on the k8s-master1 machine, then copy the binaries to k8s-master2 and k8s-master3.

Binary package download address: https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz

# Create a directory to store etcd data
$ mkdir /data/etcd/

# Create the k8s cluster configuration directories
$ mkdir -p /opt/kubernetes/{bin,cfg,ssl}

# Download the etcd binary package and put the binaries in the /opt/kubernetes/bin/ directory
$ cd /data/etcd/
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
$ tar zxvf etcd-v3.4.7-linux-amd64.tar.gz
$ cd etcd-v3.4.7-linux-amd64
$ cp -a etcd etcdctl /opt/kubernetes/bin/

# Add the /opt/kubernetes/bin directory to the PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile

Log in to k8s-master2 and k8s-master3 servers for operation

# Create the k8s cluster configuration directories
$ mkdir /data/etcd
$ mkdir -p /opt/kubernetes/{bin,cfg,ssl}

# Add the /opt/kubernetes/bin directory to the PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile

Log in to k8s-master1 for operation

# Enter the K8S cluster certificate directory
$ cd /data/ssl

# Copy the certificates to the k8s-master1 /opt/kubernetes/ssl/ directory
$ cp ca*pem server*pem /opt/kubernetes/ssl/

# Copy the etcd binaries and certificates to the k8s-master2 and k8s-master3 machines
$ scp -r /opt/kubernetes/* root@k8s-master2:/opt/kubernetes
$ scp -r /opt/kubernetes/* root@k8s-master3:/opt/kubernetes
$ cd /data/etcd

# Script that generates the etcd configuration file
$ vim etcd.sh
#!/bin/bash

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

cat <<EOF >/opt/kubernetes/cfg/etcd.yml
name: ${ETCD_NAME}
data-dir: /var/lib/etcd/default.etcd
listen-peer-urls: https://${ETCD_IP}:2380
listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379

advertise-client-urls: https://${ETCD_IP}:2379
initial-advertise-peer-urls: https://${ETCD_IP}:2380
initial-cluster: ${ETCD_CLUSTER}
initial-cluster-token: etcd-cluster
initial-cluster-state: new

client-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

peer-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

debug: false
logger: zap
log-outputs: [stderr]
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/etcd-io/etcd
Conflicts=etcd.service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
LimitNOFILE=65536
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
ExecStart=/opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
# Execute the etcd.sh configuration-generating script
$ chmod +x etcd.sh
$ ./etcd.sh etcd01 192.168.0.216 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check whether etcd started normally
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd

tcp   0   0 192.168.0.216:2379   0.0.0.0:*   LISTEN   1558/etcd
tcp   0   0 127.0.0.1:2379       0.0.0.0:*   LISTEN   1558/etcd
tcp   0   0 192.168.0.216:2380   0.0.0.0:*   LISTEN   1558/etcd

# Copy the etcd.sh script to the k8s-master2 and k8s-master3 machines
$ scp /data/etcd/etcd.sh root@k8s-master2:/data/etcd/
$ scp /data/etcd/etcd.sh root@k8s-master3:/data/etcd/

Log in to k8s-master2

# Execute the etcd.sh configuration-generating script
$ chmod +x etcd.sh
$ ./etcd.sh etcd02 192.168.0.217 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check whether etcd started normally
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd

Log in to k8s-master3 for operation

# Execute the etcd.sh configuration-generating script
$ chmod +x etcd.sh
$ ./etcd.sh etcd03 192.168.0.218 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check whether etcd started normally
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
# Log in to any master machine and check whether the etcd cluster is healthy
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379 endpoint health

+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.0.216:2379 |  true  | 38.721248ms |       |
| https://192.168.0.217:2379 |  true  | 38.621248ms |       |
| https://192.168.0.218:2379 |  true  | 38.821248ms |       |
+----------------------------+--------+-------------+-------+
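Beyond endpoint health, it can be useful to confirm the membership and see which member is the current leader. Two optional checks, using the same certificate flags as above (not part of the original steps):

# List the cluster members
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379 member list

# Show per-endpoint status, including the leader and DB size
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379 endpoint status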

3.3 Create the metrics-server certificate

Create the certificate used by metrics server

Log in to k8s-master1 for operation

$ cd /data/ssl/

# Note: "CN": "system:metrics-server" must be exactly this name, because it is used in the later RBAC authorization; otherwise requests will be rejected as anonymous
$ cat > metrics-server-csr.json <<EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

Generate metrics server certificate and private key

# Generate the certificate (ca-config.json is in the current /data/ssl directory)
$ cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

# Copy to the /opt/kubernetes/ssl directory
$ cp metrics-server-key.pem metrics-server.pem /opt/kubernetes/ssl/

# Copy to the k8s-master2 and k8s-master3 machines
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master2:/opt/kubernetes/ssl/
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master3:/opt/kubernetes/ssl/

3.4 Get the K8S binary package

Log in to k8s-master1 for operation

v1.18 Download page https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md

# Create a directory to store the k8s binary package
$ mkdir /data/k8s-package
$ cd /data/k8s-package

# Download the v1.18.2 binary package
# The author also uploaded the package to a CDN: https://cdm.yp14.cn/k8s-package/kubernetes-server-v1.18.2-linux-amd64.tar.gz
$ wget https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz
$ tar xf kubernetes-server-linux-amd64.tar.gz

The master node needs to use:

  • kubectl
  • kube-scheduler
  • kube-apiserver
  • kube-controller-manager

The node (worker) nodes need to use:

  • kubelet
  • kube-proxy

PS: in this article the master nodes also serve as worker nodes, so the kubelet and kube-proxy binaries are needed on them as well.

# Enter the bin directory of the extracted binary package
$ cd /data/k8s-package/kubernetes/server/bin

# Copy the binaries to the /opt/kubernetes/bin directory
$ cp -a kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /opt/kubernetes/bin

# Copy the binaries to the k8s-master2 and k8s-master3 /opt/kubernetes/bin directories
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master2:/opt/kubernetes/bin/
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master3:/opt/kubernetes/bin/

3.5 Create the Node kubeconfig files

Log in to k8s-master1 for operation

  • Create TLS Bootstrapping Token
  • Create kubelet kubeconfig
  • Create the kube-proxy kubeconfig
$ cd /data/ssl/

# Change the KUBE_APISERVER address on line 10 of the script
$ vim kubeconfig.sh
# Create the TLS Bootstrapping Token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://lb.ypvip.com.cn:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Generate the kubeconfig files
$ sh kubeconfig.sh

# The following files are produced
kubeconfig.sh  kube-proxy-csr.json  kube-proxy.kubeconfig  kube-proxy.csr
kube-proxy-key.pem  kube-proxy.pem  bootstrap.kubeconfig
# Copy the *kubeconfig files to the /opt/kubernetes/cfg directory
$ cp *kubeconfig /opt/kubernetes/cfg

# Copy to the k8s-master2 and k8s-master3 machines
$ scp *kubeconfig root@k8s-master2:/opt/kubernetes/cfg
$ scp *kubeconfig root@k8s-master3:/opt/kubernetes/cfg
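Optionally, before distributing the files, you can confirm that the generated kubeconfigs point at the SLB address and embed the CA (a quick check, not in the original steps):

# Show the cluster, user, and context entries of the bootstrap kubeconfig
$ kubectl config view --kubeconfig=bootstrap.kubeconfig

# The server field in both files should read https://lb.ypvip.com.cn:6443
$ grep server bootstrap.kubeconfig kube-proxy.kubeconfig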

3.6 Configure and run the Master components

Log in to k8s-master1, k8s-master2, and k8s-master3 for this operation

# Create the /data/k8s-master directory to store the master configuration scripts
$ mkdir /data/k8s-master

Log in to k8s-master1

$ cd /data/k8s-master

# Create the script that generates the kube-apiserver configuration file
$ vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \\
--runtime-config=api/all=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-truncate-enabled=true \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
# Create the script that generates the kube-controller-manager configuration file
$ vim controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--bind-address=0.0.0.0 \\
--service-cluster-ip-range=10.10.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--feature-gates=RotateKubeletServerCertificate=true \\
--feature-gates=RotateKubeletClientCertificate=true \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.20.0.0/16 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
# Create the script that generates the kube-scheduler configuration file
$ vim scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--address=0.0.0.0 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
# Add execute permission
$ chmod +x *.sh
$ cp /data/ssl/token.csv /opt/kubernetes/cfg/

# Copy token.csv and the master scripts to k8s-master2 and k8s-master3
$ scp /data/ssl/token.csv root@k8s-master2:/opt/kubernetes/cfg
$ scp /data/ssl/token.csv root@k8s-master3:/opt/kubernetes/cfg

$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master2:/data/k8s-master
$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master3:/data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.216 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check whether the three master services are running normally
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-

Log in to k8s-master2

$ cd /data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.217 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check whether the three master services are running normally
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-

Log in to k8s-master3 for operation

$ cd /data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.218 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check whether the three master services are running normally
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
# Log in to any master and check the cluster health status
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
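At this point it is also worth confirming that the apiserver is reachable through the SLB address, since everything that follows (bootstrap.kubeconfig, kubelet, kube-proxy) goes through lb.ypvip.com.cn:6443. An optional check, assuming anonymous requests to /healthz are permitted by the default system:public-info-viewer binding; otherwise use the admin certificate generated earlier in /data/ssl:

# Expect "ok" from the apiserver behind the SLB
$ curl -k https://lb.ypvip.com.cn:6443/healthz

# Or present the admin client certificate explicitly
$ curl --cacert /opt/kubernetes/ssl/ca.pem \
       --cert /data/ssl/admin.pem --key /data/ssl/admin-key.pem \
       https://lb.ypvip.com.cn:6443/healthz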

3.7 Configure automatic kubelet certificate renewal

Log in to k8s-master1 for operation

Create a ClusterRole that automatically approves related CSR requests

# Create a directory for the certificate rotation configuration
$ mkdir -p ~/yaml/kubelet-certificate-rotating
$ cd ~/yaml/kubelet-certificate-rotating
$ vim tls-instructs-csr.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
  - apiGroups: ["certificates.k8s.io"]
    resources: ["certificatesigningrequests/selfnodeserver"]
    verbs: ["create"]
# Deploy
$ kubectl apply -f tls-instructs-csr.yaml

Automatically approve the first CSR request that the kubelet-bootstrap user submits during TLS bootstrapping:

$ kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap

Automatically approve CSR requests from the system:nodes group for renewing the kubelet client certificate used to communicate with the apiserver:

$ kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes

Automatically approve CSR requests from the system:nodes group for renewing the kubelet serving certificate on the 10250 API port:

$ kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
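Once the node components in the next section are started, you can verify that bootstrapping CSRs are being approved automatically (an optional check, not in the original steps; the request name below is only a placeholder):

# List certificate signing requests; after the kubelets start they should show Approved,Issued
$ kubectl get csr

# Inspect a specific request if one stays Pending (the name here is an example placeholder)
$ kubectl describe csr node-csr-<hash>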

3.8 Configure and run the Node components

First, let's look at how the kubelet's kubelet.kubeconfig configuration is generated.

kubelet.kubeconfig is produced by the TLS bootstrapping mechanism: the kubelet first authenticates to the apiserver with the bootstrap token from bootstrap.kubeconfig, submits a CSR, the CSR is approved automatically through the bindings created in section 3.7, and the issued client certificate is written to /opt/kubernetes/ssl, after which kubelet.kubeconfig is generated and used for normal communication with the apiserver.

Log in to k8s-master1, k8s-master2, and k8s-master3 for this operation

# Create the directory for the node configuration-generating scripts
$ mkdir /data/k8s-node

Log in to k8s-master1 for operation

# Create the script that generates the kubelet configuration
$ cd /data/k8s-node
$ vim kubelet.sh
#!/bin/bash

DNS_SERVER_IP=$1
HOSTNAME=$2
CLUETERDOMAIN=$3

cat <<EOF >/opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=true \\
--v=2 \\
--hostname-override=${HOSTNAME} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--pod-infra-container-image=yangpeng2468/google_containers-pause-amd64:3.2"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration                  # object kind
apiVersion: kubelet.config.k8s.io/v1beta1   # api version
address: 0.0.0.0                            # listen address
port: 10250                                 # kubelet port
readOnlyPort: 10255                         # read-only port exposed by the kubelet
cgroupDriver: cgroupfs                      # driver, must match the driver shown by docker info
clusterDNS:
  - ${DNS_SERVER_IP}
clusterDomain: ${CLUETERDOMAIN}             # cluster domain
failSwapOn: false                           # do not fail when swap is enabled

# authentication
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem

# authorization
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s

# node resource reservation / eviction
evictionHard:
  imagefs.available: 15%
  memory.available: 1G
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s

# image garbage collection policy
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s

# certificate rotation
rotateCertificates: true                    # rotate the kubelet client certificate
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true

maxOpenFiles: 1000000
maxPods: 110
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
# Create the script that generates the kube-proxy configuration
$ vim proxy.sh
#!/bin/bash

HOSTNAME=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=2 \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                     # listen address
metricsBindAddress: 0.0.0.0:10249    # metrics address, monitoring data is scraped from here
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig  # kubeconfig to read
hostnameOverride: ${HOSTNAME}        # unique node name registered in k8s
clusterCIDR: 10.10.0.0/16            # service IP range
mode: iptables                       # use iptables mode

# To use ipvs mode instead:
#mode: ipvs                          # ipvs mode
#ipvs:
#  scheduler: "rr"
#iptables:
#  masqueradeAll: true
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
# Generate the node configuration files
$ ./kubelet.sh 10.10.0.2 k8s-master1 cluster.local
$ ./proxy.sh k8s-master1

# Check whether the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"

# Copy the kubelet.sh and proxy.sh scripts to the k8s-master2 and k8s-master3 machines
$ scp kubelet.sh proxy.sh root@k8s-master2:/data/k8s-node
$ scp kubelet.sh proxy.sh root@k8s-master3:/data/k8s-node

Log in to k8s-master2

$ cd /data/k8s-node

# Generate the node configuration files
$ ./kubelet.sh 10.10.0.2 k8s-master2 cluster.local
$ ./proxy.sh k8s-master2

# Check whether the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"

Log in to k8s-master3 for operation

$ cd /data/k8s-node

# Generate the node configuration files
$ ./kubelet.sh 10.10.0.2 k8s-master3 cluster.local
$ ./proxy.sh k8s-master3

# Check whether the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"
# Log in to any master machine and check whether the nodes joined successfully
$ kubectl get node
NAME          STATUS     ROLES    AGE    VERSION
k8s-master1   NotReady   <none>   4d4h   v1.18.2
k8s-master2   NotReady   <none>   4d4h   v1.18.2
k8s-master3   NotReady   <none>   4d4h   v1.18.2

The nodes above show NotReady because the network components have not been installed yet; they are installed below.

Solve the problem that pod logs cannot be queried (kubectl logs / kubectl exec rejected by the kubelet) by binding the apiserver's client user ("kubernetes", the CN of server.pem) to the system:kubelet-api-admin role:

$ vim ~/yaml/apiserver-to-kubelet-rbac.yml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
# Apply
$ kubectl apply -f ~/yaml/apiserver-to-kubelet-rbac.yml

3.9 Install the Calico network (IPIP mode)

Log in to k8s-master1 for operation

Download the Calico v3.14.0 YAML file

# Directory to store the calico yaml file
$ mkdir -p ~/yaml/calico
$ cd ~/yaml/calico

# Note: the following manifest uses the self-built etcd as the datastore
$ curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O
The following parts of calico-etcd.yaml need to be modified.

Secret configuration
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  etcd-key: (cat /opt/kubernetes/ssl/server-key.pem | base64 -w 0)   # fill in the output of this command
  etcd-cert: (cat /opt/kubernetes/ssl/server.pem | base64 -w 0)      # fill in the output of this command
  etcd-ca: (cat /opt/kubernetes/ssl/ca.pem | base64 -w 0)            # fill in the output of this command
ConfigMap configuration modification
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379"
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

The main parameters of ConfigMap are as follows:

  • etcd_endpoints: Calico uses etcd to store its network topology and state; this parameter specifies the etcd addresses. You can reuse the etcd that the K8S masters use, or build a separate one.
  • calico_backend: the Calico backend, bird by default.
  • cni_network_config: the CNI-compliant network configuration; type=calico means the kubelet looks for the calico binary in the CNI_PATH directory (default /opt/cni/bin) and uses it to allocate container IP addresses.
  • If TLS authentication is configured for etcd, the corresponding ca, cert, and key files need to be specified.
Modify the IP segment used by Pods (by default Calico uses the 192.168.0.0/16 segment):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.20.0.0/16"
Configure the network interface auto-discovery rules

Add the interface discovery rules to the env of the calico-node DaemonSet:

            # IPv4 interface auto-discovery rule
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth.*"
            # IPv6 interface auto-discovery rule
            - name: IP6_AUTODETECTION_METHOD
              value: "interface=eth.*"
Calico mode settings
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

Calico has two network modes: BGP and IPIP

  • When using IPIP mode, set CALICO_IPV4POOL_IPIP="Always". IPIP builds tunnels between the node routes, connecting the node networks to each other. With IPIP enabled, Calico creates a virtual network interface named tunl0 on every node (see the quick check after this list).
  • When using BGP mode, set CALICO_IPV4POOL_IPIP="Off".
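A quick way to confirm that IPIP mode is actually in effect once Calico is deployed (an optional check, not in the original article):

# The tunl0 interface should exist on every node once calico-node is running
$ ip -d link show tunl0

# If calicoctl is installed (this article does not install it), the IP pool should report IPIP enabled
$ calicoctl get ippool -o wide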
Error resolution

Error: [ERROR][8] startup/startup.go 146: failed to query kubeadm's config map error=Get https://10.10.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Reason: the worker node cannot reach the apiserver address. Check the calico configuration: the apiserver IP and port should be configured explicitly; if they are not, calico falls back to the default service IP and port 443. The relevant field names are KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT, and KUBERNETES_SERVICE_PORT_HTTPS.

Resolution:

Add the following environment variables to the calico-node DaemonSet env:

            - name: KUBERNETES_SERVICE_HOST
              value: "lb.ypvip.com.cn"
            - name: KUBERNETES_SERVICE_PORT
              value: "6443"
            - name: KUBERNETES_SERVICE_PORT_HTTPS
              value: "6443"

After modifying calico-etcd.yaml, deploy it:

# Deploy
$ kubectl apply -f calico-etcd.yaml

# View the calico pods
$ kubectl get pods -n kube-system | grep calico

# Check the nodes; they should now be Ready
$ kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   4d4h   v1.18.2
k8s-master2   Ready    <none>   4d4h   v1.18.2
k8s-master3   Ready    <none>   4d4h   v1.18.2

3.10 Deploy cluster CoreDNS

Log in to k8s-master1 for operation

deploy.sh is a convenience script that generates the CoreDNS yaml configuration.

# Install the jq dependency
$ yum install jq -y

$ cd ~/yaml
$ mkdir coredns
$ cd coredns

# Download the CoreDNS deployment project
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes

By default, CLUSTER_DNS_IP automatically obtains the cluster IP of the kube-dns service, but since kube-dns is not deployed here, the cluster IP has to be specified manually in deploy.sh (around line 111):

111 if [[ -z $CLUSTER_DNS_IP ]]; then
112   # Default IP to kube-dns IP
113   # CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
114   CLUSTER_DNS_IP=10.10.0.2
# Preview the rendered manifest without deploying
$ ./deploy.sh

# Deploy
$ ./deploy.sh | kubectl apply -f -

# View CoreDNS
$ kubectl get svc,pods -n kube-system | grep coredns

Test CoreDNS resolution

# Create a busybox Pod
$ vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - name: busybox
      image: busybox:1.28.4
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
# Deploy
$ kubectl apply -f busybox.yaml

# Test resolution; it resolves normally
$ kubectl exec -i busybox -n default nslookup kubernetes
Server:    10.10.0.2
Address 1: 10.10.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local

3.11 Deploy the cluster monitoring service Metrics Server

Log in to k8s-master1 for operation

$ cd ~/yaml

# Pull version v0.3.6
$ git clone https://github.com/kubernetes-sigs/metrics-server.git -b v0.3.6

$ cd metrics-server/deploy/1.8+

Only the metrics-server-deployment.yaml configuration file needs to be modified.

# The differences before and after the modification
$ git diff metrics-server-deployment.yaml
diff --git a/deploy/1.8+/metrics-server-deployment.yaml b/deploy/1.8+/metrics-server-deployment.yaml
index 2393e75..2139e4a 100644
--- a/deploy/1.8+/metrics-server-deployment.yaml
+++ b/deploy/1.8+/metrics-server-deployment.yaml
@@ -29,8 +29,19 @@ spec:
         emptyDir: {}
       containers:
       - name: metrics-server
-        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
-        imagePullPolicy: Always
+        image: yangpeng2468/metrics-server-amd64:v0.3.6
+        imagePullPolicy: IfNotPresent
+        resources:
+          limits:
+            cpu: 400m
+            memory: 1024Mi
+          requests:
+            cpu: 50m
+            memory: 50Mi
+        command:
+          - /metrics-server
+          - --kubelet-insecure-tls
+          - --kubelet-preferred-address-types=InternalIP
         volumeMounts:
         - name: tmp-dir
           mountPath: /tmp
# Deploy
$ kubectl apply -f .

# Verify
$ kubectl top node
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   72m          7%     1002Mi          53%
k8s-master2   121m         3%     1852Mi          12%
k8s-master3   300m         3%     1852Mi          20%

# Memory units: Mi = 1024*1024 bytes, M = 1000*1000 bytes
# CPU units: 1 core = 1000m, so 250m = 1/4 core
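Once kubectl top node works, the same metrics pipeline also serves pod-level usage, which is what the HPA and quick troubleshooting rely on. An optional follow-up check (not in the original article; jq was installed in the CoreDNS step):

# Per-pod resource usage across all namespaces
$ kubectl top pod --all-namespaces

# Or query the metrics API directly through the apiserver aggregation layer
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .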

3.12 Deploy the Kubernetes Dashboard

For Kubernetes Dashboard deployment, please refer to K8S Dashboard 2.0 deployment article.

Conclusion

This Kubernetes v1.18.2 binary deployment has been tested by the author without issues. The article can be used directly for production environment deployment and covers the deployment of all the core Kubernetes components.
