k8s cluster deployment

Introduction to K8s:
Official documentation: https://kubernetes.io/zh/docs/concepts/overview/what-is-kubernetes/

(1) Add the Aliyun Docker repo
shell> wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

(2) Install Docker
shell> yum -y install docker-ce
shell> docker -v
shell> systemctl enable docker
shell> systemctl start docker
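
As a quick sanity check (not part of the original steps), it can help to confirm that Docker is running and to note which cgroup driver it uses, since the kubeadm preflight checks later warn about this:

shell> systemctl status docker
shell> docker info | grep -i "cgroup driver"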

(3) Install Kubernetes: add the repo
shell> cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
## Install on all nodes
shell> yum install -y kubelet kubeadm kubectl
shell> systemctl enable kubelet && systemctl start kubelet
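
If the nodes should match the v1.17.0 control plane shown later in this walkthrough, the packages can be pinned; a hedged sketch (the exact version strings available in the Aliyun repo are an assumption), followed by verifying what was installed:

shell> yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
shell> kubeadm version -o short
shell> kubectl version --client --short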

(4) Initialize the k8s master
shell> kubeadm init --apiserver-advertise-address 10.10.202.140 --pod-network-cidr=10.244.0.0/16

Because the default image registry may be unreachable, the Aliyun image repository is specified instead:

kubeadm init \
--apiserver-advertise-address=10.10.202.140 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

--apiserver-advertise-address specifies which interface the Master uses to communicate with the other cluster nodes. If the Master has more than one interface, it is recommended to set this explicitly; otherwise kubeadm automatically selects the interface with the default gateway.
--pod-network-cidr specifies the address range of the Pod network. Kubernetes supports a variety of network add-ons, and each has its own requirements for --pod-network-cidr. It is set to 10.244.0.0/16 here because we are going to use the flannel network add-on, which requires this CIDR.

[root@node140 /]# kubeadm init \
--apiserver-advertise-address=10.10.202.140 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16
W1211 22:26:52.608250 70792 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1211 22:26:52.608464 70792 version.go:102] falling back to the local client version: v1.17.0
W1211 22:26:52.608775 70792 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1211 22:26:52.608797 70792 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node140 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.202.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node140 localhost] and IPs [10.10.202.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node140 localhost] and IPs [10.10.202.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1211 22:27:45.746769 70792 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1211 22:27:45.748837 70792 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.003938 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node140 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: y6wdsf.dkce7wf8lij4rbgf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
--discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07

What kubeadm init did:

(1) Ran the pre-flight checks.

(2) Generated the bootstrap token and certificates.

(3) Generated the kubeconfig files, which the kubelet needs to communicate with the Master.

(4) Installed the control-plane components, pulling their Docker images from the configured registry (the Aliyun mirror here), which may take some time depending on the quality of the network.

(5) Installed the add-on components kube-proxy and CoreDNS.

(6) Reported that the Kubernetes Master initialized successfully.

(7) Printed tips on how to configure kubectl, how to install the Pod network, and how to join other nodes to the cluster; these are carried out in the following steps.
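
If pulling the control-plane images is slow, they can also be pre-pulled before running kubeadm init, as the preflight output above suggests; a minimal sketch using the same Aliyun image repository (flags assumed to behave as in this kubeadm version):

shell> kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
shell> kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers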

(5) Configure kubectl (as prompted by kubeadm init)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
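
A quick check that kubectl can now reach the API server; at this point only the master is listed, and it stays NotReady until the Pod network is installed in the next step:

shell> kubectl cluster-info
shell> kubectl get nodes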

(6) Install the Pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
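
To confirm the network add-on comes up, list the pods; the namespace the flannel DaemonSet lands in depends on the manifest version, so the check below simply lists everything. CoreDNS should move to Running and the master to Ready once the network is working:

shell> kubectl get pods --all-namespaces -o wide
shell> kubectl get nodes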

(7) Join hosts to the cluster

shell> kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
--discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07

Running the join command reported three errors:
[preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

First error: Docker is not using the systemd cgroup driver. Modify the ExecStart line in docker.service:
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

Second error: the kubelet service is not enabled; fix with: systemctl enable kubelet.service
Third error: the kernel bridge parameter is not set; fix with: echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Fix each error, then rerun the kubeadm join command.
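
Put together, a minimal sketch of applying the three fixes on the worker node; persisting the bridge parameter under /etc/sysctl.d/ is an extra step not in the original, so the setting survives reboots:

## 1. Edit the ExecStart line in docker.service as shown above, then reload and restart Docker
shell> systemctl daemon-reload
shell> systemctl restart docker
shell> docker info | grep -i "cgroup driver"
## 2. Enable the kubelet service
shell> systemctl enable kubelet.service
## 3. Set the bridge kernel parameter and make it persistent
shell> echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
shell> echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
shell> sysctl --system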

(8) View cluster status
shell> kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
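
As a further health check, the control-plane pods themselves can be listed; all of them should be Running:

shell> kubectl get pods -n kube-system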

(9) Join nodes to the cluster
Step 1: Environment preparation on each node (see the sketch after this list)
1. Disable the firewall and SELinux
2. Disable swap
3. Set up hostname resolution
4. Enable the required kernel parameters
5. Enable and start the kubelet
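
A minimal sketch of these preparation steps on a CentOS 7 node; the node141-node143 IP addresses in /etc/hosts are an assumption based on the hostnames, so adjust them to your environment:

## Disable firewall and SELinux
shell> systemctl stop firewalld && systemctl disable firewalld
shell> setenforce 0
shell> sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
## Disable swap
shell> swapoff -a
shell> sed -i '/ swap / s/^/#/' /etc/fstab
## Hostname resolution
shell> cat <<EOF >> /etc/hosts
10.10.202.140 node140
10.10.202.141 node141
10.10.202.142 node142
10.10.202.143 node143
EOF
## Kernel parameter for bridged traffic
shell> echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
shell> sysctl --system
## Enable and start kubelet
shell> systemctl enable kubelet && systemctl start kubelet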

Step 2: Join node141, node142 and node143
shell> kubeadm join 10.10.202.140:6443 --token y6wdsf.dkce7wf8lij4rbgf \
--discovery-token-ca-cert-hash sha256:2c307c40531df0dec0908647a9913c09174a0962531694c383fbc14315c1ae07
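
Bootstrap tokens expire after 24 hours by default; if a node joins later and the original token is no longer valid, a fresh join command can be generated on the master:

shell> kubeadm token create --print-join-command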

(10) View the cluster
shell> kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node140   Ready    master   90d   v1.17.0
node141   Ready    <none>   90d   v1.17.0
node142   Ready    <none>   90d   v1.17.0
node143   Ready    <none>   90d   v1.17.0
It may take a while for newly joined nodes to become Ready.

(11) Removing a node:
(1) Drain the node (enter maintenance mode)
shell> kubectl drain node141 --delete-local-data --force --ignore-daemonsets
(2) Delete the node
shell> kubectl delete node node141
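
On the removed node itself, kubeadm's local state can be cleaned up so the machine can be re-joined later; note that kubeadm reset does not clean up iptables or IPVS rules, which may need to be flushed manually:

shell> kubeadm reset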
