Installing Kubernetes using kubeadm v1.20.x


Reference document: https://kuboard.cn/install/history-k8s/install-k8s-1.20.x.html

Our deployment process is:

  • 1. Run kubeadm init with containerd as the container runtime (CRI). Since docker is not used, manage containers with the crictl command, whose usage is similar to docker (a few example commands follow this list); I will write a separate article on crictl later.

  • 2. Use the calico network plug-in instead of flannel.

  • 3. Install metrics-server to obtain monitoring data.

  • 4. Install the kubernetes dashboard for a more intuitive web interface.

  • 5. Install ingress-nginx.

  • 6. After debugging, test RBAC permissions and write YAML files for the service accounts used by different users.
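A few crictl commands and their docker counterparts, as a quick taste (crictl is installed alongside kubeadm/kubelet later in this document; the nginx image is only an example):

# List running containers (similar to `docker ps`)
crictl ps
# List pods managed by the kubelet on this node
crictl pods
# List local images (similar to `docker images`)
crictl images
# Pull an image (example image; point it at your own registry if needed)
crictl pull docker.io/library/nginx:1.21
# Show the logs of a container by ID (similar to `docker logs`)
crictl logs <container-id>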

Container Runtime

  • Starting with Kubernetes v1.20, the built-in docker dependency (dockershim) is deprecated. If both docker and containerd are installed on a host, docker is preferred as the container runtime; if docker is not installed, containerd is used;
  • This document uses containerd as the container runtime;

About binary installation

  • kubeadm is the installation method officially supported by kubernetes; "binary installation" is not. This document uses the kubeadm tool recommended by kubernetes.io to install the kubernetes cluster.

Check centos / hostname

# Both the master node and the worker node need to be executed
cat /etc/redhat-release

# The output of hostname here will be the node name of the machine in the Kubernetes cluster
# You cannot use localhost as the name of a node
hostname

# Please use the lscpu command to check the CPU information
# Architecture: x86_64 this installation document does not support arm architecture
# CPU (s): the number of CPU cores cannot be less than 2
lscpu

Modify hostname

If you need to modify the hostname, execute the following command:

# Modify hostname
hostnamectl set-hostname your-new-host-name
# View modification results
hostnamectl status
# Set hostname resolution
echo "127.0.0.1   $(hostname)" >> /etc/hosts

Check network

Execute commands on all nodes

[root@demo-master-a-1 ~]$ ip route show
default via 172.21.0.1 dev eth0 
169.254.0.0/16 dev eth0 scope link metric 1002 
172.21.0.0/20 dev eth0 proto kernel scope link src 172.21.0.12 

[root@demo-master-a-1 ~]$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:12:a4:1b brd ff:ff:ff:ff:ff:ff
    inet 172.17.216.80/20 brd 172.17.223.255 scope global dynamic eth0
       valid_lft 305741654sec preferred_lft 305741654sec

IP address used by the kubelet:

  • The ip route show output tells you the machine's default network interface, usually eth0, e.g. default via 172.21.0.23 dev eth0
  • The ip address output shows the IP address of that default interface; Kubernetes uses this address to communicate with the other nodes in the cluster, e.g. 172.17.216.80
  • The IP addresses used by Kubernetes on all nodes must be mutually reachable (no NAT mapping, no security group or firewall isolation)
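A quick way to confirm that the node addresses are mutually reachable (the target IP below is hypothetical; substitute another node's kubelet IP):

# Ping another node's kubelet IP
ping -c 3 172.17.216.81
# Show which interface and source address would be used to reach it
ip route get 172.17.216.81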

Install containerd / kubelet / kubeadm / kubectl

Use root to execute the following code on all nodes to install the software:

  • containerd
  • nfs-utils
  • kubectl / kubeadm / kubelet

Execute the following script manually; the result is the same as the quick installation. Replace ${1} in the script with the version number you need, such as 1.20.6.

Please choose a docker registry mirror according to your network

# Both the master node and the worker node need to be executed
# The last parameter, 1.20.6, specifies the kubernetes version; any 1.20.x version is supported
# Alibaba Cloud docker registry mirror

export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
  
#!/bin/bash

# Both the master node and the worker node need to be executed

# Install containerd
# Reference documents are as follows
# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sysctl --system

# Uninstall old version
yum remove -y containerd.io

# Set up yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd
yum install -y containerd.io-1.4.3

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml

systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd

# Install nfs-utils
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
yum install -y wget

# Turn off firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Configure yum source for K8S
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Uninstall old version
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, kubectl
# Replace ${1} with the kubernetes version number, for example, 1.20.6
yum install -y kubelet-${1} kubeadm-${1} kubectl-${1}
crictl config runtime-endpoint /run/containerd/containerd.sock

# Reload systemd and start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

containerd --version
kubelet --version


If you run the systemctl status kubelet command at this point, you will see an error saying that kubelet failed to start. Please ignore this error: kubelet can only start normally after the kubeadm init step below has been completed.
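If you want to see what kubelet is complaining about in the meantime (both are standard systemd commands, purely informational):

# kubelet stays in a restart loop until kubeadm init has run
systemctl status kubelet
# Show the most recent kubelet log entries
journalctl -u kubelet --no-pager | tail -n 20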

Initialize master node

Environment variables used during initialization:

  • APISERVER_NAME cannot be the hostname of the master
  • APISERVER_NAME must be all lowercase letters, numbers and decimal points, and cannot contain minus signs
  • The network segment used by POD_SUBNET cannot overlap with the network segment where the master / worker nodes are located. The value of this field is a CIDR. If you are not familiar with CIDR notation, keep the export POD_SUBNET=10.100.0.1/16 command as it is, without modification.

Manually execute the following script; the result is the same as the quick initialization. Replace ${1} in the script with the version number you need, such as 1.20.6.

# Execute only on the master node
# Replace x.x.x.x with the intranet IP of the master node
# The export command is only valid in the current shell session. If you open a new shell window and want to continue the installation, please re-execute the export commands here
export MASTER_IP=x.x.x.x

# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.demo

# The network segment where the kubernetes pods (container groups) live. It is created by kubernetes after installation and does not need to exist in your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
 
#!/bin/bash
# Execute only on the master node
# Terminate execution on script error
set -e
if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1m Make sure you have set the environment variable POD_SUBNET and APISERVER_NAME \033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  echo current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi


# View full configuration options https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v${1}
imageRepository: registry.aliyuncs.com/k8sxio
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
dns:
  type: CoreDNS
  imageRepository: swr.cn-east-2.myhuaweicloud.com${2}
  imageTag: 1.8.0

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# kubeadm init
# Depending on the network speed of your server, you may need to wait 3 - 10 minutes
echo ""
echo "Pulling images, please wait..."
kubeadm config images pull --config=kubeadm-config.yaml
echo ""
echo "Initializing the master node"
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install calico network plug-in
# Reference documents https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo ""
echo "install calico-3.17.1"
rm -f calico-3.17.1.yaml
kubectl create -f https://kuboard.cn/install-script/v1.20.x/calico-operator.yaml
wget https://kuboard.cn/install-script/v1.20.x/calico-custom-resources.yaml
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico-custom-resources.yaml
kubectl create -f calico-custom-resources.yaml
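After the script finishes, you can watch the calico pods come up before moving on (the namespaces below are the ones the tigera operator normally creates; adjust if your manifests differ):

# The operator itself runs in tigera-operator, the calico components in calico-system
kubectl get pods -n tigera-operator
watch kubectl get pods -n calico-system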

If the following error occurs:

[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
failed to pull image "swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0": output: time="2021-04-30T13:26:14+08:00" level=fatal 
msg="pulling image failed: rpc error: code = NotFound desc = failed to pull and unpack image \"swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0\": 
failed to resolve reference \"swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0\": 
swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0: not found", error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

Execute the following command:

Append the parameter /coredns to the end of the original command:

curl -sSL https://kuboard.cn/install-script/v1.20.x/init_master.sh | sh -s 1.20.6 /coredns
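The extra /coredns argument is substituted into the ${2} placeholder of the kubeadm-config.yaml generated above, so the dns section effectively becomes the following (a sketch derived from the script above, not additional configuration you need to write):

dns:
  type: CoreDNS
  imageRepository: swr.cn-east-2.myhuaweicloud.com/coredns
  imageTag: 1.8.0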

Check the initialization result of the master

# Execute only on the master node
# Execute the following command and wait for 3-10 minutes until all container groups are in Running state
watch kubectl get pod -n kube-system -o wide

# View the initialization results of the master node
kubectl get nodes -o wide

Initialize worker node

Get join command parameters
Execute on the master node

# Execute only on the master node
kubeadm token create --print-join-command
 
This prints a ready-to-use kubeadm join command with its parameters, as shown below:
# Output of the kubeadm token create command
kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303

The token is valid for 2 hours. Within those 2 hours, you can use it to join any number of worker nodes.
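If you need a token with a different lifetime, kubeadm can set the TTL explicitly when creating it (standard kubeadm flags, shown here only as an option):

# Create a join token that stays valid for 24 hours
kubeadm token create --ttl 24h --print-join-command
# List existing tokens and their expiry times
kubeadm token list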

Initialize worker
Execute on all worker nodes

# Execute only on the worker node
# Replace x.x.x.x with the intranet IP of the master node
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=apiserver.demo
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303
 
Check initialization results
Execute on the master node

# Execute only on the master node
kubectl get nodes -o wide
    
The output results are as follows:
[root@demo-master-a-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
demo-master-a-1   Ready    master   5m3s    v1.20.x
demo-worker-a-1   Ready    <none>   2m26s   v1.20.x
demo-worker-a-2   Ready    <none>   3m56s   v1.20.x

Install metrics-server - k8s resource monitoring metrics

Reference link: https://blog.csdn.net/wangmiaoyan/article/details/102868728

K8s core resource metrics tool: metrics-server
Monitoring tools for custom metrics: prometheus, k8s-prometheus-adapter

prometheus: prometheus can collect resource metrics in many dimensions, such as CPU utilization, number of network connections, network send/receive rates, and process creation/recycling rates. Early k8s did not support these metrics, so the various metrics collected by prometheus need to be integrated into k8s, allowing k8s to decide whether a pod should be scaled based on them.

prometheus is used not only as a monitoring system, but also as a provider of certain special resource metrics. These metrics are not standard built-in k8s metrics; they are called custom metrics. For prometheus to expose the data it collects as metrics that k8s can consume, a plug-in called k8s-prometheus-adapter is required. These metrics are the basis for deciding whether a pod needs to be scaled, for example CPU utilization and memory usage.

With the introduction of prometheus and k8s-prometheus-adapter, a new generation of the k8s monitoring architecture has taken shape.

K8S next generation architecture

Core metrics pipeline: composed of the kubelet, metrics-server and the API exposed by the API server; it provides cumulative CPU utilization, real-time memory utilization, pod resource usage and container disk utilization.

Monitoring pipeline: collects various metrics from the system and provides them to end users, storage systems and the HPA. It includes core metrics and many other non-core metrics. Non-core metrics cannot be interpreted by k8s itself, so k8s-prometheus-adapter is required to convert the data collected by prometheus into a format that k8s can understand and use.

Core indicator monitoring

heapster was used previously, but it was deprecated after 1.12; its replacement is metrics-server. metrics-server is a user-developed API server that serves only core resource metrics; it does not serve the pod and deployment APIs themselves. metrics-server is not part of k8s itself but a pod running on k8s. For users to consume the API provided by metrics-server seamlessly through k8s, the new architecture places an aggregator in front: the k8s API server and metrics-server are combined behind it, and the metrics are then obtained through the group /apis/metrics.k8s.io/v1beta1, as shown in the figure.

Figure 1

Later, if users develop other API servers, they can also be integrated into the aggregator to provide services, as shown in the figure.

Figure 2

Check the default k8s API versions with kubectl api-versions; you will see that there is no metrics.k8s.io group yet.

After metrics-server is deployed, kubectl api-versions will show the metrics.k8s.io group.
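A quick check (standard kubectl command; the grep simply filters for the group name):

kubectl api-versions | grep metrics.k8s.io
# Expected output once metrics-server is running:
# metrics.k8s.io/v1beta1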

Deploying metrics server

In the kubernetes project, go to cluster/addons/metrics-server, find the corresponding manifests and download them:

[root@master bcia]# mkdir metrics-server -p 
[root@master bcia]# cd metrics-server/

# Download all files at once
[root@master metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml ; do wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/metrics-server/$file;done    
--2019-11-02 10:18:10--  https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/metrics-server/auth-delegator.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 398 [text/plain]
Saving to: 'auth-delegator.yaml'

100%[==========================================================================>] 398         --.-K/s   in 0s
...ellipsis...

[root@master metrics-server]# ls
auth-delegator.yaml  metrics-apiservice.yaml         metrics-server-service.yaml
auth-reader.yaml     metrics-server-deployment.yaml  resource-reader.yaml

# Run all files at once
[root@master metrics-server]# kubectl apply -f .          
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.apps/metrics-server-v0.3.6 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

If errors appear after applying, delete everything at once (e.g. kubectl delete -f .) and modify a few places, as shown in the figures; rough sketches of both edits follow Figure 3 below.

  • 1. metrics-server-deployment.yaml
    Add --kubelet-insecure-tls to the metrics-server command so that the kubelet's certificate is not verified. Comment out port 10255 and communicate with the kubelet over https on port 10250 instead. In the addon-resizer command, write concrete values for cpu, memory and extra-memory, and comment out minClusterSize={{ metrics_server_min_cluster_size }}.

Figure 1

Figure 2

  • 2. Add nodes/stats to resource-reader.yaml, as shown in the figure

Figure 3
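Rough sketches of the two edits (the flags are real metrics-server v0.3.x flags and nodes/stats is a real resource name, but the surrounding manifest layout is abbreviated here and may differ from the addon files you downloaded):

# metrics-server-deployment.yaml (excerpt of the metrics-server container)
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls      # do not verify the kubelet serving certificate
        - --kubelet-port=10250        # use the secure port instead of the read-only 10255

# resource-reader.yaml (excerpt of the ClusterRole rules)
rules:
- apiGroups: [""]
  resources:
  - pods
  - nodes
  - nodes/stats                       # added so metrics-server can read kubelet stats
  - namespaces
  verbs: ["get", "list", "watch"]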

Test whether it works

  • 1. Check whether the pods are running normally
[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-bzgss                 1/1     Running   0          9d
coredns-8686dcc4fd-xgd49                 1/1     Running   0          9d
etcd-master                              1/1     Running   0          9d
kube-apiserver-master                    1/1     Running   0          9d
kube-controller-manager-master           1/1     Running   0          9d
kube-flannel-ds-amd64-52d6n              1/1     Running   0          9d
kube-flannel-ds-amd64-k8qxt              1/1     Running   0          8d
kube-flannel-ds-amd64-lnss4              1/1     Running   0          9d
kube-proxy-4s5mf                         1/1     Running   0          8d
kube-proxy-b6szk                         1/1     Running   0          9d
kube-proxy-wsnfz                         1/1     Running   0          9d
kube-scheduler-master                    1/1     Running   0          9d
kubernetes-dashboard-76f6bf8c57-rncvn    1/1     Running   0          8d
metrics-server-v0.3.6-677d79858c-75vk7   2/2     Running   0          18m
tiller-deploy-57c977bff7-tcnrf           1/1     Running   0          7d20h
  • 2. Check the API versions again and you will see the additional metrics.k8s.io/v1beta1 group

Figure 4

  • 3. Check node and pod monitoring metrics
[root@master metrics-server]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   145m         3%     1801Mi          11%
node2    697m         17%    12176Mi         77%
node3    838m         20%    12217Mi         77%
[root@master metrics-server]# kubectl top pods
NAME                                            CPU(cores)   MEMORY(bytes)
account-deploy-6d86f9df74-khv4v                 5m           444Mi
admin-deploy-55dcf4bc4d-srw8m                   2m           317Mi
backend-deploy-6f7bdd9bf4-w4sqc                 4m           497Mi
crm-deploy-7879694578-cngzp                     4m           421Mi
device-deploy-77768bf87c-ct5nc                  5m           434Mi
elassandra-0                                    168m         4879Mi
gateway-deploy-68c988676d-wnqsz                 4m           379Mi
jhipster-alerter-74fc8984c4-27bx8               1m           46Mi
jhipster-console-85556468d-kjfg6                3m           119Mi
jhipster-curator-67b58477b9-5f8br               1m           11Mi
jhipster-logstash-74878f8b49-mpn62              59m          860Mi
jhipster-zipkin-5b5ff7bdbc-bsxhk                1m           1571Mi
order-deploy-c4c846c54-2gxkp                    5m           440Mi
pos-registry-76bbd6c689-q5w2b                   442m         474Mi
recv-deploy-5dd686c947-v4qqh                    5m           424Mi
store-deploy-54c994c9b6-82b8z                   6m           493Mi
task-deploy-64c9984d88-fqxqq                    6m           461Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-bbrz6   4m           4Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-bj4bq   4m           5Mi
wiggly-cat-redis-ha-sentinel-655f7b5f9d-f9pdd   4m           5Mi
wiggly-cat-redis-ha-server-b58c8d788-6xlwk      3m           11Mi
wiggly-cat-redis-ha-server-b58c8d788-r949h      3m           8Mi
wiggly-cat-redis-ha-server-b58c8d788-w2gtb      3m           22Mi

At this point, the metrics server deployment is complete.

Deploy the kubernetes dashboard

Reference: https://github.com/kubernetes/dashboard

# Download the yaml file for dashboard
wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# The default Dashboard can only be accessed inside the cluster. Modify the Service to NodePort type and expose it to the outside
vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-7mcvs   1/1     Running   0          27h
pod/kubernetes-dashboard-5dbf55bd9d-r8q6t        1/1     Running   0          27h

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.191.54   <none>        8000/TCP        27h
service/kubernetes-dashboard        NodePort    10.96.83.45    <none>        443:30001/TCP   27h

# Create a service account and bind it to the built-in cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

# Token for logging in on the Dashboard home page
eyJhbGciOiJSUzI1NiIsImtpZCI6IjYwM0dGMkdLcjhrQzg1ZjVpSC1wZVVQaDQzcTdPUWVKeS00Y05TazNteGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4ta2RuNzgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTBkZWRiOTMtZWJhOC00ZjdmLWE2NjUtMGMzMmExM2Q0ZTYzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.r7IlTPsIzp86eloi7XIh9OPV203pyrXyzLewFKtZFshqEA3FbxJ4T7FztRKTyD_tLVDxpBMVruCJ9vK3RhOV0E6SnX4Frf4dofZt6KeUzGq89nCLr4edYlmHzAcx56QLK9cYLFF2AOxYUh6CloyZbUhiiNET_OQzG68VT2tLvrSHiVELe4hriQFgEfwAe-P-jGy-2xlmbPb7nk0tCRKBe1BDCktC9FvdMqBtS9BN3sRSOoDGuNga5W5Db0r-DTNxOcn3IgHUsQasDK7IW-J-6Ju_sul5NQ9MPfjuN6rWeuDU1iDSC0m-lomXjjfgB_UZ1r7d4DmilOgHKPTbOEE-zg
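If you only want the raw token value rather than the full describe output, something like the following works on clusters of this vintage, where a token secret is created automatically for each service account (standard kubectl jsonpath usage, shown as a sketch):

# Extract just the token of the dashboard-admin service account
SECRET_NAME=$(kubectl -n kube-system get serviceaccount dashboard-admin -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 -d && echo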


# Cleanup (only if you want to remove the service account and binding again)
kubectl delete serviceaccount dashboard-admin -n kube-system
kubectl delete clusterrolebinding dashboard-admin

Use the token from the output above to sign in to the Dashboard.

Access address: https://NodeIP:30001, here it is https://192.168.172.31:30001/#/login

If Chrome warns that your connection is not private, you can type thisisunsafe directly on that page to proceed. The following interface will appear.
