Installing Kubernetes using kubeadm, v1.20.x

Official documents:

Our deployment process here is:

  • 1. Run kubeadm init, switching the container runtime (CRI) to containerd. The crictl command does not support tab completion, but its usage is similar to docker; I will write a separate article on crictl later.

  • 2. Use the calico network plug-in instead of flannel

  • 3. Install metrics-server to collect monitoring data

  • 4. Install the kubernetes dashboard for a more intuitive web interface

  • 5. Install ingress-nginx

  • 6. After debugging, test RBAC permissions and write YAML files for the service accounts used by different users

Container Runtime

  • Starting with Kubernetes v1.20, the docker dependency is removed by default. If both docker and containerd are installed on the host, docker is used as the container runtime first; if docker is not installed, containerd is used as the container runtime;
  • This document uses containerd as the container runtime;
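The runtime selection described above can be checked with a short sketch (the messages are illustrative; this is not part of the install script):

```shell
# Sketch: report which container runtimes are present on this host
for rt in docker containerd; do
  if command -v "$rt" >/dev/null 2>&1; then
    echo "$rt found"
  else
    echo "$rt not installed"
  fi
done
```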

About binary installation

  • kubeadm is the installation method officially supported by Kubernetes; "binary" installation is not. This document uses the officially recommended kubeadm tool to install the kubernetes cluster.

Check CentOS / hostname

# Both the master node and the worker node need to be executed
cat /etc/redhat-release

# The output of hostname here will be the node name of the machine in the Kubernetes cluster
# You cannot use localhost as the name of a node

# Please use the lscpu command to check the CPU information
# Architecture: x86_64 this installation document does not support arm architecture
# CPU (s): the number of CPU cores cannot be less than 2
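The architecture and CPU checks above can be scripted; a minimal sketch (the messages are illustrative):

```shell
# Sketch of the prerequisite checks: x86_64 architecture and at least 2 CPU cores
arch=$(uname -m)
cores=$(nproc)
if [ "$arch" != "x86_64" ]; then
  echo "unsupported architecture: $arch"
fi
if [ "$cores" -lt 2 ]; then
  echo "need at least 2 CPU cores, found $cores"
fi
echo "arch=$arch cores=$cores"
```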

Modify hostname

If you need to modify the hostname, execute the following command:

# Modify hostname
hostnamectl set-hostname your-new-host-name
# View modification results
hostnamectl status
# Set hostname resolution
echo "   $(hostname)" >> /etc/hosts
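An idempotent variant of the hosts-file append above can look like this sketch (add_host and the demo entry are hypothetical; the real entry uses your node's IP and hostname):

```shell
# Sketch: append a hosts entry only when it is not already present
# (add_host and the demo entry below are hypothetical)
add_host() {
  hosts_file=$1
  entry=$2
  grep -qF "$entry" "$hosts_file" 2>/dev/null || echo "$entry" >> "$hosts_file"
}
f=$(mktemp)                             # stand-in for /etc/hosts in this demo
add_host "$f" "127.0.0.1   demo-host"
add_host "$f" "127.0.0.1   demo-host"   # second call is a no-op
wc -l < "$f"                            # the entry appears only once
```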

Check network

Execute commands on all nodes

[root@demo-master-a-1 ~]$ ip route show
default via <gateway-ip> dev eth0
<link-local-cidr> dev eth0 scope link metric 1002
<subnet-cidr> dev eth0 proto kernel scope link src <node-ip>

[root@demo-master-a-1 ~]$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:12:a4:1b brd ff:ff:ff:ff:ff:ff
    inet <node-ip>/<prefix> brd <broadcast-ip> scope global dynamic eth0
       valid_lft 305741654sec preferred_lft 305741654sec

IP address used by kubelet:

  • The ip route show output reveals the machine's default network interface, usually eth0, from the line beginning with default via
  • The ip address output shows the IP address of that default interface; Kubernetes uses this address to communicate with the other nodes in the cluster
  • The IP addresses used by Kubernetes on all nodes must be mutually reachable (no NAT mapping, no security group or firewall isolation)
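The interface lookup described in the bullets above can be sketched as follows (the parsing is illustrative; it assumes the ip command from iproute2 is available):

```shell
# Sketch: find the default interface from `ip route show`, then print its IPv4 address
route_line=$(ip route show default 2>/dev/null | head -n1)
dev=$(printf '%s\n' "$route_line" | awk '{for (i=1; i<NF; i++) if ($i == "dev") print $(i+1)}' | head -n1)
if [ -n "$dev" ]; then
  ip -4 addr show "$dev" | awk '/inet /{print $2}'
else
  echo "no default route found"
fi
```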

Install containerd / kubelet / kubeadm / kubectl

Use root to execute the following code on all nodes to install the software:

  • containerd
  • nfs-utils
  • kubectl / kubeadm / kubelet

Execute the following code manually; the result is the same as the quick installation. Replace the version number in the script with the one you need, such as 1.20.6.

Please select any docker hub image according to your network

# Both the master node and the worker node need to be executed
# The last parameter, 1.20.6, specifies the kubernetes version, which supports all 1.20.x installations
# Alibaba cloud docker hub image


# Both the master node and the worker node need to be executed

# Install containerd
# Reference documents are as follows

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sysctl --system

# Uninstall old version
yum remove -y

# Set up yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo

# Install containerd
yum install -y containerd.io

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

sed -i ""  /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml

systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd

# Install NFS utils
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
yum install -y wget

# Turn off firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
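The grep -v swap filter above removes the swap lines from fstab; here it is demonstrated on hypothetical sample fstab content:

```shell
# The `grep -v swap` filter above, demonstrated on sample fstab content
# (the device names are hypothetical)
sample='/dev/sda1 / xfs defaults 0 0
/dev/sda2 swap swap defaults 0 0'
printf '%s\n' "$sample" | grep -v swap   # only the non-swap line survives
```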

# Configure yum source for K8S
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=
EOF

# Uninstall old version
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, kubectl
# Replace ${1} with the kubernetes version number, for example, 1.20.6
yum install -y kubelet-${1} kubeadm-${1} kubectl-${1}
crictl config runtime-endpoint /run/containerd/containerd.sock

# Reload systemd and start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

containerd --version
kubelet --version

If you execute the systemctl status kubelet command at this point, you will get an error saying that kubelet failed to start. Please ignore this error: kubelet can only start normally after the kubeadm init operation below has been completed.

Initialize master node

Environment variables used during initialization:

  • APISERVER_NAME cannot be the hostname of the master
  • APISERVER_NAME must consist only of lowercase letters, numbers, and dots, and cannot contain minus signs
  • The network segment specified by POD_SUBNET cannot overlap with the network segment where the master / worker nodes reside. The value of this field is a CIDR. If you are not familiar with the concept of CIDR, execute the export POD_SUBNET= command below as-is, without modification.
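The constraints above can be checked before running the script; a hedged sketch with hypothetical helpers valid_name and is_cidr (the sample values are illustrative):

```shell
# Sketch: validate the two variables before running the init script
# (valid_name and is_cidr are illustrative helpers, not part of the original script)
valid_name() {
  case "$1" in
    ("") return 1 ;;                 # empty is invalid
    (*[!a-z0-9.]*) return 1 ;;       # only lowercase letters, digits, dots
    (*) return 0 ;;
  esac
}
is_cidr() {
  case "$1" in
    ([0-9]*.[0-9]*.[0-9]*.[0-9]*/[0-9]*) return 0 ;;
    (*) return 1 ;;
  esac
}
valid_name "apiserver.demo" && echo "name ok"
valid_name "Api-Server" || echo "name rejected"   # uppercase and minus sign
is_cidr "10.100.0.1/16" && echo "subnet ok"
```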

Manually execute the following code; the result is the same as the quick initialization. Replace the version number in the script with the one you need, such as 1.20.6.

# Execute only on the master node
# Replace x.x.x.x with the intranet IP of the master node
# The export command is only valid in the current shell session. If you open a new shell window and want to continue the installation, re-execute the export commands here
export MASTER_IP=x.x.x.x

# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.demo

# The network segment where the kubernetes container group is located. After installation, the network segment is created by kubernetes and does not exist in your physical network in advance
export POD_SUBNET=
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
# Execute only on the master node
# Terminate execution on script error
set -e
if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1m Make sure you have set the environment variables POD_SUBNET and APISERVER_NAME \033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  exit 1
fi

# View full configuration options
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v${1}
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: ""
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
dns:
  type: CoreDNS
  imageTag: 1.8.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# kubeadm init
# Depending on the network speed of your server, you need to wait 3 - 10 minutes
echo ""
echo "Pulling images, please wait..."
kubeadm config images pull --config=kubeadm-config.yaml
echo ""
echo "Initializing master node"
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install calico network plug-in
# Reference documents
echo ""
echo "install calico-3.17.1"
rm -f calico-3.17.1.yaml
kubectl create -f
sed -i "s#${POD_SUBNET}#" calico-custom-resources.yaml
kubectl create -f calico-custom-resources.yaml

If the following error occurs:

[config/images] Pulled
[config/images] Pulled
failed to pull image "": output: time="2021-04-30T13:26:14+08:00" level=fatal 
msg="pulling image failed: rpc error: code = NotFound desc = failed to pull and unpack image \"\": 
failed to resolve reference \"\": not found", error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

Execute the following command:

Append the /coredns parameter to the end of the original command:

curl -sSL | sh -s 1.20.6 /coredns

Check the initialization result of the master

# Execute only on the master node
# Execute the following command and wait for 3-10 minutes until all container groups are in Running state
watch kubectl get pod -n kube-system -o wide

# View the initialization results of the master node
kubectl get nodes -o wide

Initialize worker node

Get join command parameters
Execute on the master node

# Execute only on the master node
kubeadm token create --print-join-command
This prints a usable kubeadm join command with its parameters, as shown below.
# Output of the kubeadm token create command
kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303

The token is valid for 2 hours. Within 2 hours, you can use this token to initialize any number of worker nodes.
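If you save the join command, the token and hash can later be extracted from it; a sketch using the example command above (the parsing helpers are illustrative):

```shell
# Sketch: extract the token and CA cert hash from a saved join command
# (the command string is copied from the example output above)
join_cmd='kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303'
token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token  *\([^ ]*\).*/\1/p')
cahash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash  *\([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$cahash"
```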

# Initialize worker
Execute on all worker nodes

# Execute only on the worker node
# Replace x.x.x.x with the intranet IP of the master node
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=apiserver.demo
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303
# Check initialization results
Execute on the master node

# Execute only on the master node
kubectl get nodes -o wide
The output is as follows:
[root@demo-master-a-1 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
demo-master-a-1   Ready    master   5m3s    v1.20.x
demo-worker-a-1   Ready    <none>   2m26s   v1.20.x
demo-worker-a-2   Ready    <none>   3m56s   v1.20.x
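To script the readiness check, the kubectl get nodes output can be parsed; a sketch with hypothetical sample data (on a live cluster, pipe the real output in):

```shell
# Sketch: count nodes that are not Ready in `kubectl get nodes` output
# (sample data below; on a live cluster: kubectl get nodes | count_not_ready)
count_not_ready() { awk 'NR>1 && $2 != "Ready" {n++} END {print n+0}'; }
sample='NAME              STATUS     ROLES    AGE     VERSION
demo-master-a-1   Ready      master   5m3s    v1.20.6
demo-worker-a-1   NotReady   <none>   2m26s   v1.20.6'
printf '%s\n' "$sample" | count_not_ready   # prints 1
```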

Install metrics-server - K8s resource monitoring metrics

Reference link:

K8s resource metrics tool: metrics-server
Monitoring tools for custom metrics: prometheus, k8s-prometheus-adapter

prometheus can collect resource metrics in many dimensions, such as CPU utilization, number of network connections, network send and receive rates, and process creation and recycling rates. These metrics were not supported by early k8s, so the various metrics collected by prometheus need to be integrated into k8s, which can then decide whether a pod needs to be scaled based on them.

prometheus is used not only as a monitoring system but also as a provider of certain special resource metrics. These metrics are not standard built-in k8s metrics; they are called custom metrics. For prometheus to expose the data it collects as metrics, it needs a plug-in, called k8s-prometheus-adapter. Such metrics are the basic criteria for deciding whether a pod needs to be scaled, for example CPU utilization and memory usage.

With the introduction of prometheus and k8s prometheus adapter, a new generation of k8s architecture has been formed.

K8S next generation architecture

Core metrics pipeline: composed of kubelet, metrics-server, and the APIs provided by the API server; it covers cumulative CPU utilization, real-time memory utilization, pod resource utilization, and container disk utilization.

Monitoring pipeline: collects various metric data from the system and provides it to end users, storage systems, and the HPA. It includes core metrics and many other non-core metrics. Non-core metrics cannot be parsed by k8s itself, so k8s-prometheus-adapter is needed to convert the data collected by prometheus into a format that k8s can understand and use.

Core metrics monitoring

heapster was used for this before, but it was deprecated after v1.12 and replaced by metrics-server. metrics-server is an add-on API server that serves resource metrics for nodes and pods, not for services or deployments. metrics-server itself is not part of k8s, but a pod running on k8s. For users to consume the API services provided by metrics-server on k8s seamlessly, the new-generation architecture combines them: as shown in the figure, an aggregator aggregates the k8s API server and metrics-server, and the metrics are then obtained through the aggregated group under /apis/.

Figure 1

Later, if the user has other APIs, the server can be integrated into the aggregator to provide services, as shown in the figure.

Figure 2

Check the default k8s API versions with kubectl api-versions; you can see that the metrics.k8s.io group is not present.

After deploying metrics-server and checking kubectl api-versions again, you can see the metrics.k8s.io group.
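The group check can be scripted; a sketch demonstrated on sample api-versions output (has_metrics_group is illustrative; on a live cluster, pipe kubectl api-versions in):

```shell
# Sketch: check whether the metrics.k8s.io group is registered
# (sample data below; on a live cluster: kubectl api-versions | has_metrics_group)
has_metrics_group() { grep -q '^metrics\.k8s\.io'; }
printf 'apps/v1\nbatch/v1\nmetrics.k8s.io/v1beta1\n' | has_metrics_group && echo "metrics.k8s.io registered"
```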

Deploying metrics server

In the kubernetes project on GitHub, go to cluster/addons/metrics-server, find the corresponding files, and download them

[root@master bcia]# mkdir metrics-server -p 
[root@master bcia]# cd metrics-server/

# Download all files at once
[root@master metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml ; do wget <base-url>/$file; done
--2019-11-02 10:18:10--  <base-url>/auth-delegator.yaml
Resolving <host>... connected.
HTTP request sent, awaiting response... 200 OK
Length: 398 [text/plain]
Saving to: 'auth-delegator.yaml'

100%[==========================================================================>] 398         --.-K/s   in 0s

[root@master metrics-server]# ls
auth-delegator.yaml  metrics-apiservice.yaml         metrics-server-service.yaml
auth-reader.yaml     metrics-server-deployment.yaml  resource-reader.yaml

# Run all files at once
[root@master metrics-server]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.apps/metrics-server-v0.3.6 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

If an error is found after running, delete everything at once, modify several places as shown in the figures, and apply again:

  • 1.metrics-server-deployment.yaml
    Add --kubelet-insecure-tls to the command of the metrics-server container, which means the kubelet's certificate is not verified. Comment out port 10255 and use port 10250 to communicate over https instead. In the command of the addon-resizer container, write concrete values for cpu, memory, and extra-memory, and comment out minClusterSize={{ metrics_server_min_cluster_size }}

Figure 1

Figure 2

  • 2. Add nodes/stats to resource-reader.yaml, as shown in the figure

Figure 3

Test whether it can be used

  • 1. Check whether the pods are running normally
[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-bzgss                 1/1     Running   0          9d
coredns-8686dcc4fd-xgd49                 1/1     Running   0          9d
etcd-master                              1/1     Running   0          9d
kube-apiserver-master                    1/1     Running   0          9d
kube-controller-manager-master           1/1     Running   0          9d
kube-flannel-ds-amd64-52d6n              1/1     Running   0          9d
kube-flannel-ds-amd64-k8qxt              1/1     Running   0          8d
kube-flannel-ds-amd64-lnss4              1/1     Running   0          9d
kube-proxy-4s5mf                         1/1     Running   0          8d
kube-proxy-b6szk                         1/1     Running   0          9d
kube-proxy-wsnfz                         1/1     Running   0          9d
kube-scheduler-master                    1/1     Running   0          9d
kubernetes-dashboard-76f6bf8c57-rncvn    1/1     Running   0          8d
metrics-server-v0.3.6-677d79858c-75vk7   2/2     Running   0          18m
tiller-deploy-57c977bff7-tcnrf           1/1     Running   0          7d20h
  • 2. Check kubectl api-versions again and you will see the new metrics.k8s.io group

Figure 4

  • 3. Check node and pod monitoring metrics
[root@master metrics-server]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   145m         3%     1801Mi          11%
node2    697m         17%    12176Mi         77%
node3    838m         20%    12217Mi         77%
[root@master metrics-server]# kubectl top pods
NAME                                            CPU(cores)   MEMORY(bytes)
account-deploy-6d86f9df74-khv4v                 5m           444Mi
admin-deploy-55dcf4bc4d-srw8m                   2m           317Mi
backend-deploy-6f7bdd9bf4-w4sqc                 4m           497Mi
crm-deploy-7879694578-cngzp                     4m           421Mi
device-deploy-77768bf87c-ct5nc                  5m           434Mi
elassandra-0                                    168m         4879Mi
gateway-deploy-68c988676d-wnqsz                 4m           379Mi
jhipster-alerter-74fc8984c4-27bx8               1m           46Mi
jhipster-console-85556468d-kjfg6                3m           119Mi
jhipster-curator-67b58477b9-5f8br               1m           11Mi
jhipster-logstash-74878f8b49-mpn62              59m          860Mi
jhipster-zipkin-5b5ff7bdbc-bsxhk                1m           1571Mi
order-deploy-c4c846c54-2gxkp                    5m           440Mi
pos-registry-76bbd6c689-q5w2b                   442m         474Mi
recv-deploy-5dd686c947-v4qqh                    5m           424Mi
store-deploy-54c994c9b6-82b8z                   6m           493Mi
task-deploy-64c9984d88-fqxqq                    6m           461Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-bbrz6   4m           4Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-bj4bq   4m           5Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-f9pdd   4m           5Mi
wiggly-cat-redis-ha-server-b58c8d788-6xlwk      3m           11Mi
wiggly-cat-redis-ha-server-b58c8d788-r949h      3m           8Mi
wiggly-cat-redis-ha-server-b58c8d788-w2gtb      3m           22Mi
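The kubectl top pods output above can be sorted by CPU to find the busiest pods; a sketch using a few sample rows (the parsing is illustrative):

```shell
# Sketch: sort `kubectl top pods` rows by CPU (strip the trailing "m" from millicores)
# using a few sample rows from the output above
sample='NAME CPU(cores) MEMORY(bytes)
account-deploy-6d86f9df74-khv4v 5m 444Mi
elassandra-0 168m 4879Mi
pos-registry-76bbd6c689-q5w2b 442m 474Mi'
printf '%s\n' "$sample" | awk 'NR>1 {cpu=$2; sub(/m$/, "", cpu); print cpu, $1}' | sort -rn
```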

At this point, the metrics server deployment is complete.

k8s deploy dashboard

reference resources:

# Download the yaml file for dashboard

# The default Dashboard can only be accessed inside the cluster. Modify the Service to NodePort type and expose it to the outside
vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-7mcvs   1/1     Running   0          27h
pod/kubernetes-dashboard-5dbf55bd9d-r8q6t        1/1     Running   0          27h

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   <cluster-ip>   <none>        8000/TCP        27h
service/kubernetes-dashboard        NodePort    <cluster-ip>   <none>        443:30001/TCP   27h

# Create a service account and bind the default cluster admin administrator cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
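The token can also be extracted from the describe output programmatically; a sketch with hypothetical sample text (get_token is illustrative):

```shell
# Sketch: pull the bearer token out of `kubectl describe secrets` output
# (get_token and the sample text are illustrative)
get_token() { awk '$1 == "token:" {print $2}'; }
printf 'Name:  dashboard-admin-token\ntoken:      abc.def.ghi\n' | get_token   # prints abc.def.ghi
```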

# The token printed above is used to log in on the Dashboard home page

# To delete the service account and binding later, if needed:
kubectl delete serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl delete clusterrolebinding dashboard-admin

Use the output token to sign in to the Dashboard.

Access address: https://NodeIP:30001

If chrome reports that your connection is not private, you can type thisisunsafe directly on that page. The following interface will then appear.

Posted on Wed, 03 Nov 2021 04:04:55 -0400 by LordShryku