Set up a complete K8s cluster ------- based on CentOS 8

Create three CentOS nodes:

192.168.5.141 k8s-master
192.168.5.142 k8s-nnode1
192.168.5.143 k8s-nnode2

View the CentOS version:

# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core) 
Note: Steps 1~8 must be performed on all nodes; steps 9 and 10 are performed on the master node only; step 11 is performed on the worker nodes only.
If step 9, 10, or 11 fails, run the kubeadm reset command to clean up the environment and reinstall.

1. Turn off the firewall

# systemctl stop firewalld
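Stopping firewalld lasts only until the next reboot; to keep it off permanently (as the troubleshooting in step 11 also does), disable it as well:

# systemctl disable firewalld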

2. Disable SELinux

 

# setenforce 0
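setenforce 0 only switches SELinux to permissive mode until the next reboot; to make the change persistent, also edit /etc/selinux/config (a common companion step, not shown in the original):

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config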

 

3. Disable swap

# nano /etc/fstab and comment out the swap mount line to permanently disable the swap partition

Note: the swap partition must be turned off for k8s to run

# swapoff -a
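For reference, a commented-out swap line in /etc/fstab looks something like this (the device path varies by system; this one is only an example):

#/dev/mapper/cl-swap     swap                    swap    defaults        0 0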

 

4. Map hostnames to IP addresses

# nano /etc/hosts and add the following:

192.168.5.141 k8s-master
192.168.5.142 k8s-nnode1
192.168.5.143 k8s-nnode2
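If the node hostnames have not been set yet, they can be assigned with hostnamectl; run the matching command on each node (this step is implied but not shown in the original):

# hostnamectl set-hostname k8s-master    (on 192.168.5.141)
# hostnamectl set-hostname k8s-nnode1    (on 192.168.5.142)
# hostnamectl set-hostname k8s-nnode2    (on 192.168.5.143)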

5. Pass bridged IPv4 traffic to iptables chains

 

# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


# sysctl --system
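If sysctl --system reports that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first usually fixes this (an extra step that is often needed on CentOS 8, not shown in the original):

# modprobe br_netfilter
# sysctl --system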

 

6. Install Docker

 

Uninstall old versions of Docker:

# sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
# sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
# sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# sudo yum install -y docker-ce-3:19.03.15-3.el8 docker-ce-cli-1:19.03.15-3.el8 containerd.io-1.3.9-3.1.el8
# docker --version
Docker version 19.03.15, build 99e3ed8919
Change the cgroup driver to systemd (Docker's driver defaults to cgroupfs; changing it to systemd keeps it consistent with k8s and avoids conflicts):
# cd /etc/
# mkdir docker
# cd docker
# nano daemon.json
or # cd /etc/ && mkdir docker && cd docker && nano daemon.json
Write the following into daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Enable Docker on boot and start it:

# systemctl enable docker && systemctl start docker

View the Docker drivers:

# docker info | grep Driver
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: cgroupfs
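Note that the output above still shows cgroupfs: the daemon.json setting only takes effect after Docker is restarted. After a restart, the check should report systemd:

# systemctl restart docker
# docker info | grep -i cgroup
Cgroup Driver: systemd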

 

Install tc (kubeadm's preflight check warns if it is missing):
# yum install tc -y
If yum cannot find a package named tc, on CentOS 8 it is provided by the iproute-tc package (# yum install -y iproute-tc).

7. Configure the Kubernetes yum source:

# nano /etc/yum.repos.d/kubernetes.repo and add the following:
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
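Optionally, verify that the repo resolves and list the available package versions:

# yum list kubelet --showduplicates | tail -n 5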

 

8. Install k8s

 

# yum -y install kubelet-1.18.5 kubeadm-1.18.5 kubectl-1.18.5 --disableexcludes=kubernetes

 

Enable kubelet on boot:

# systemctl enable kubelet

Start the kubelet daemon:

# systemctl start kubelet

9. Deploy Kubernetes Master

 

Pull the images from another repository on Docker Hub.
After half a day of searching online, the common advice is to pull the images one by one from some mirror repository and rename them, but every tutorial names a different repository, and some tutorials are so old that their repositories have not been updated in years. Rather than hand you a repository, this section shows how to find one yourself.
And since pulling and renaming images one by one is too tedious, we will write a script.

 

The process is as follows:

 

First, use the following command to get the required image names:

 

# kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Note: newer versions rename the coredns image to coredns/coredns; remember to change it in the image list
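Note also that the list shows v1.18.20 because kubeadm queries the latest 1.18 patch release by default, while this guide installs 1.18.5. The listing can be pinned to the installed version:

# kubeadm config images list --kubernetes-version v1.18.5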

 

To see where to pull from, go to Docker Hub and search for a component such as kube-proxy:
https://hub.docker.com/search?q=kube-proxy&type=image
Sorting the results by recent updates turns up a repository with 10k+ downloads that is updated frequently (gotok8s, which the script below uses).


Then start scripting:

# cd /etc/
# mkdir k8s
# cd k8s
# nano pull_k8s_images.sh
or # cd /etc/ && mkdir k8s && cd k8s && nano pull_k8s_images.sh

Write the following into pull_k8s_images.sh:

#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail

## Define the versions here; adjust the version numbers according to the list above

KUBE_VERSION=v1.18.5
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
DNS_VERSION=1.6.7

## The original registry name that the images must be renamed to
GCR_URL=k8s.gcr.io

## The Docker Hub repository to pull from instead
DOCKERHUB_URL=gotok8s

## The image list; for newer versions, change coredns to coredns/coredns
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${DNS_VERSION}
)

## Loop over the list: pull each image, retag it to its k8s.gcr.io name, then remove the original tag
for imageName in "${images[@]}" ; do
  docker pull "$DOCKERHUB_URL/$imageName"
  docker tag "$DOCKERHUB_URL/$imageName" "$GCR_URL/$imageName"
  docker rmi "$DOCKERHUB_URL/$imageName"
done

Then grant execute permission:

# chmod +x ./pull_k8s_images.sh

Execute it:

# ./pull_k8s_images.sh

Perform the initialization:
# kubeadm init \
   --kubernetes-version=1.18.5 \
   --apiserver-advertise-address=192.168.5.141 \
   --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

After the command completes, record the kubeadm join command at the end of the generated output; it must be executed later for the other nodes to join the Kubernetes cluster.

  kubeadm join 192.168.5.141:6443 --token n1anmw.ubhpjr33jdncdg5b \
     --discovery-token-ca-cert-hash sha256:372c1db40560d9abc307f3882718cfd66d2773bcb377ea60d6cd60eb52717122
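If the join command is misplaced or the token expires (tokens are valid for 24 hours by default), a fresh one can be generated on the master:

# kubeadm token create --print-join-command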

As prompted after init:

Add a regular system user (e.g. k8s) so that kubectl can be used without sudo, and configure kubectl as the init output instructs:

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
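Alternatively, when working as root, the init output also suggests simply pointing KUBECONFIG at the admin config:

# export KUBECONFIG=/etc/kubernetes/admin.conf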

View the Docker images:

# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.5             a1daed4e2b60        14 months ago       117MB
k8s.gcr.io/kube-controller-manager   v1.18.5             8d69eaf196dc        14 months ago       162MB
k8s.gcr.io/kube-apiserver            v1.18.5             08ca24f16874        14 months ago       173MB
k8s.gcr.io/kube-scheduler            v1.18.5             39d887c6621d        14 months ago       95.3MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        18 months ago       683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        19 months ago       43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        22 months ago       288MB

Since kube-apiserver by default opens only the secure port 6443 and not the insecure port 8080, and kubectl will access the API server here through port 8080, modify the configuration file to allow access over port 8080:

 

# nano /etc/kubernetes/manifests/kube-apiserver.yaml
Change --insecure-port=0 to:
--insecure-port=8080
Add or modify:
--insecure-bind-address=0.0.0.0

# systemctl restart kubelet

# sysctl net.bridge.bridge-nf-call-iptables=1
# kubectl get node

NAME       STATUS   ROLES  AGE VERSION
k8s-master NotReady master 11m v1.18.5
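Optionally, verify that the insecure port responds; a quick sanity check, assuming curl is installed and the kube-apiserver pod has restarted with the new flags:

# curl http://127.0.0.1:8080/version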


10. Install the Calico network

 

# yum install -y wget
# wget http://download.zhufunin.com/k8s_1.18/calico.yaml
# kubectl apply -f calico.yaml

# kubectl get pods -n kube-system   (check at intervals; STATUS changes to ContainerCreating and then Running)
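Instead of re-running the command manually, kubectl can also watch for status changes (press Ctrl-C to stop):

# kubectl get pods -n kube-system -w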

Run it again:

# kubectl get node, and the status now becomes Ready

At this point, the k8s master node has been created.

 

11. Worker nodes join the cluster (on k8s-nnode1 and k8s-nnode2)

 

# yum install -y wget
# wget http://download.zhufunin.com/k8s_1.18/1-18-pause.tar.gz
# wget http://download.zhufunin.com/k8s_1.18/1-18-kube-proxy.tar.gz
# docker load -i 1-18-pause.tar.gz
# docker load -i 1-18-kube-proxy.tar.gz

 

To add the new nodes to the cluster, copy the kubeadm join command output by kubeadm init and execute it on both k8s-nnode1 and k8s-nnode2:

# kubeadm join 192.168.5.141:6443 --token n1anmw.ubhpjr33jdncdg5b \
     --discovery-token-ca-cert-hash sha256:372c1db40560d9abc307f3882718cfd66d2773bcb377ea60d6cd60eb52717122

 

Then, from any node:

# kubectl -s http://192.168.5.141:8080 get nodes

View the pods from the master:

# kubectl get pods kube-proxy-7jmxj -n kube-system -o wide
# kubectl get pods -n kube-system -o wide

View the kubelet connection log on a node:

# journalctl -f -u kubelet

If a node's status is NotReady and the kubelet log shows "Unable to update cni config: No networks found in /etc/cni/net.d", copy the CNI config from the master:

# scp -r 192.168.5.141:/etc/cni /etc/cni

 

If problems persist, flush iptables and restart the services:

# iptables --flush
# iptables -t nat --flush
# systemctl stop firewalld
# systemctl disable firewalld
# systemctl restart docker
# systemctl restart kubelet
