Set up a complete K8s cluster ------- based on the CentOS 8 system

Create three CentOS nodes: k8s-master, k8s-nnode1, k8s-nnode2

View the CentOS version:

# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core) 
Note: steps 1~8 must be run on all nodes; steps 9 and 10 run only on the master node; step 11 runs only on the worker nodes.
If step 9, 10, or 11 fails, run the kubeadm reset command to clean up the environment and reinstall.

1. Disable the firewall

# systemctl stop firewalld

2. Disable SELinux

# setenforce 0

(This is temporary; to make it permanent, set SELINUX=disabled in /etc/selinux/config.)


3. Disable swap

# nano /etc/fstab, comment out the swap mount line to permanently disable the swap partition

Note: k8s requires the swap partition to be turned off

# swapoff -a


4. Map hostnames to IP addresses

# nano /etc/hosts, add one line per node in the form <ip> <hostname>, covering k8s-master, k8s-nnode1, and k8s-nnode2
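The entries to add look like the following. This is a sketch with placeholder addresses: the 192.168.1.x IPs are illustrative and must be replaced with your machines' real IPs, and HOSTS_FILE defaults to a scratch file so the sketch is safe to run; on a real node use HOSTS_FILE=/etc/hosts as root.

```shell
# Append hostname/IP mappings for the three nodes.
# The 192.168.1.x addresses are placeholders -- substitute real IPs.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
cat >> "$HOSTS_FILE" << 'EOF'
192.168.1.10 k8s-master
192.168.1.11 k8s-nnode1
192.168.1.12 k8s-nnode2
EOF
cat "$HOSTS_FILE"
```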

5. Pass bridged IPv4 traffic to the iptables chains

# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# sysctl --system


6. Install docker


Uninstall old docker versions:

# sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate

Install the repository tools and add the docker repository:

# sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
# sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# sudo yum install -y docker-ce-3:19.03.15-3.el8 docker-ce-cli-1:19.03.15-3.el8
# docker --version
Docker version 19.03.15, build 99e3ed8919
Change the cgroup driver from cgroupfs to systemd (docker's file driver defaults to cgroupfs; switching it to systemd keeps it consistent with k8s and avoids conflicts):

# mkdir /etc/docker
# nano /etc/docker/daemon.json

Write in:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Enable docker at boot and start it:

# systemctl enable docker && systemctl start docker

  View the file drivers (Cgroup Driver still reads cgroupfs here; restart docker after writing daemon.json and it will report systemd):

# docker info | grep Driver
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: cgroupfs
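The daemon.json step above can also be done non-interactively. This is a sketch: DOCKER_ETC defaults to a scratch directory so it can run unprivileged; on a real node set DOCKER_ETC=/etc/docker, run as root, and restart docker afterwards.

```shell
# Write docker's daemon.json without an editor. DOCKER_ETC defaults to
# a scratch directory for safe illustration; use DOCKER_ETC=/etc/docker
# on a real node (as root), then restart docker.
DOCKER_ETC="${DOCKER_ETC:-$(mktemp -d)}"
mkdir -p "$DOCKER_ETC"
cat > "$DOCKER_ETC/daemon.json" << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$DOCKER_ETC/daemon.json"
```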


 Install tc (kubeadm's preflight checks warn if it is missing; on CentOS 8 the package providing it is iproute-tc):
# yum install -y iproute-tc

7. Configure the Kubernetes yum repository:

# nano /etc/yum.repos.d/kubernetes.repo, add a [kubernetes] section containing at least:

[kubernetes]
name=Kubernetes Repo
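The rest of the repository file did not survive here. For reference, the definition commonly used in that era pointed at Google's package mirror; treat the URLs below as a historical assumption, since the packages.cloud.google.com repositories have since been deprecated in favour of pkgs.k8s.io:

```ini
[kubernetes]
name=Kubernetes Repo
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```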





8. Install k8s

# yum -y install kubelet-1.18.5 kubeadm-1.18.5 kubectl-1.18.5 --disableexcludes=kubernetes

Enable kubelet at boot:

# systemctl enable kubelet

Start the kubelet daemon:

# systemctl start kubelet

9. Deploy Kubernetes Master


Pull the images from other repositories on Docker Hub.

Many tutorials suggest pulling the images one by one from some alternative repository and renaming them, but every tutorial names a different repository, and many of those have not been updated in years. Rather than hand you a fish, this section shows how to find a usable repository yourself; and since pulling and renaming image by image is tedious, we write a script for it.


The process is as follows:


First, use the following command to get the list of required image names:

# kubeadm config images list

Note: newer releases rename the coredns image to coredns/coredns; remember to change it in the image list.


To decide where to pull from, go to Docker Hub and search for a component name such as kube-proxy.
Sort the results by recent updates; a repository with 10k+ downloads and frequent updates can be found that way.





Then write the pull script (the file name pull-images.sh below is illustrative; the original name was not preserved):

# cd /etc/ && mkdir k8s && cd k8s && nano pull-images.sh

set -o errexit
set -o nounset
set -o pipefail

## Define the versions here; change them to match the list printed above
KUBE_VERSION=v1.18.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0

## This is the original registry name, which the images are renamed back to at the end
GCR_URL=k8s.gcr.io

## Here is the Docker Hub repository to pull from (substitute the one you found)
DOCKERHUB_URL=<dockerhub-repository>

## Here is the image list; for newer versions change coredns to coredns/coredns
images=(
kube-apiserver:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-proxy:${KUBE_VERSION}
pause:${PAUSE_VERSION}
coredns:${CORE_DNS_VERSION}
etcd:${ETCD_VERSION}
)

## Here is the loop that pulls and renames
for imageName in ${images[@]} ; do
  docker pull $DOCKERHUB_URL/$imageName
  docker tag $DOCKERHUB_URL/$imageName $GCR_URL/$imageName
  docker rmi $DOCKERHUB_URL/$imageName
done

Then grant execute permission and run it:

# chmod +x ./pull-images.sh
# ./pull-images.sh


Perform the initialization (substitute your master's IP and the CIDRs you want; for reference, kubeadm's default service CIDR is 10.96.0.0/12, and the calico manifest used in step 10 defaults to a 192.168.0.0/16 pod CIDR):

# kubeadm init \
 --kubernetes-version=1.18.5 \
 --apiserver-advertise-address=<master-ip> \
 --service-cidr=<service-cidr> --pod-network-cidr=<pod-cidr>

After the command completes, record the kubeadm join command printed at the end of its output; the other nodes need it to join the Kubernetes cluster:

  kubeadm join <master-ip>:6443 --token n1anmw.ubhpjr33jdncdg5b \
     --discovery-token-ca-cert-hash sha256:372c1db40560d9abc307f3882718cfd66d2773bcb377ea60d6cd60eb52717122

As prompted at the end of kubeadm init, configure kubectl access (the guide does this as a regular user named k8s with passwordless sudo, but any sudo-capable user works):

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

View the docker images:

# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.5             a1daed4e2b60        14 months ago       117MB
k8s.gcr.io/kube-controller-manager   v1.18.5             8d69eaf196dc        14 months ago       162MB
k8s.gcr.io/kube-apiserver            v1.18.5             08ca24f16874        14 months ago       173MB
k8s.gcr.io/kube-scheduler            v1.18.5             39d887c6621d        14 months ago       95.3MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        18 months ago       683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        19 months ago       43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        22 months ago       288MB

By default kube-apiserver only listens on the secure port 6443 and does not enable the insecure port 8080. To let kubectl reach the API server over port 8080 (used with -s below), modify the manifest:

# nano /etc/kubernetes/manifests/kube-apiserver.yaml

Change --insecure-port=0 to --insecure-port=8080 (adding the flag if it is absent), then restart kubelet:

  # systemctl restart kubelet

  # sysctl net.bridge.bridge-nf-call-iptables=1
  # kubectl get node

NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   11m   v1.18.5

(NotReady is expected at this point; the node becomes Ready once the network add-on from step 10 is installed.)





10. Install the calico network

# yum install -y wget
# wget <calico.yaml manifest URL>
# kubectl apply -f calico.yaml

# kubectl get pods -n kube-system, check at intervals until STATUS becomes ContainerCreating and then Running
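"Check at intervals" can be automated with a small retry loop. In this sketch CHECK_CMD is a stand-in defaulting to a plain echo so the loop's logic can be run anywhere; on the real master you would set CHECK_CMD='kubectl get pods -n kube-system':

```shell
# Poll until the command's output reports Running, up to a retry limit.
# CHECK_CMD is a placeholder; on a real master point it at kubectl.
CHECK_CMD="${CHECK_CMD:-echo Running}"
STATUS=pending
for attempt in 1 2 3 4 5; do
  if $CHECK_CMD | grep -q Running; then
    STATUS=ready
    break
  fi
  sleep 2
done
echo "$STATUS"
```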




# kubectl get node, at this point the node's state becomes Ready



At this point, the k8s master node has been created.


11. Join the worker nodes to the cluster (on k8s-nnode1 and k8s-nnode2)

# yum install -y wget
# wget <URL of 1-18-pause.tar.gz>
# wget <URL of 1-18-kube-proxy.tar.gz>
# docker load -i 1-18-pause.tar.gz
# docker load -i 1-18-kube-proxy.tar.gz


  Add the new nodes to the cluster by executing, on each of k8s-nnode1 and k8s-nnode2, the kubeadm join command that kubeadm init printed earlier:

#  kubeadm join <master-ip>:6443 --token n1anmw.ubhpjr33jdncdg5b \
     --discovery-token-ca-cert-hash sha256:372c1db40560d9abc307f3882718cfd66d2773bcb377ea60d6cd60eb52717122
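The bootstrap token in the join command has a fixed shape: six lowercase alphanumerics, a dot, then sixteen more (kubeadm's [a-z0-9]{6}.[a-z0-9]{16} format). A quick sanity check before pasting the command onto the nodes:

```shell
# Validate a kubeadm bootstrap token's format (not its actual validity --
# that would require asking the API server). Token taken from the join
# command above.
TOKEN="n1anmw.ubhpjr33jdncdg5b"
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "malformed token"
fi
```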


Then, from any node (using the insecure port enabled earlier):

# kubectl -s http://<master-ip>:8080 get nodes




On the master, view the pods:

# kubectl get pods kube-proxy-7jmxj -n kube-system -o wide
# kubectl get pods -n kube-system -o wide

On a node, watch the kubelet connection log:

# journalctl -f -u kubelet

If a node's status is NotReady and the kubelet log shows Unable to update cni config: No networks found in /etc/cni/net.d,

  copy the CNI configuration over from the master: scp -r /etc/cni root@<node-ip>:/etc/


If problems persist, reset the network rules and restart the services:

# iptables --flush
# iptables -t nat --flush
# systemctl stop firewalld
# systemctl disable firewalld
# systemctl restart docker
# systemctl restart kubelet






Tags: Kubernetes

Posted on Sat, 27 Nov 2021 12:54:35 -0500 by strangesoul