CentOS 7: installing Kubernetes 1.18.1 with flannel

1, Overview

Building a Kubernetes cluster by hand is tedious. To simplify the process there are several installation and configuration tools, such as kubeadm, kubespray, and RKE. I chose the official kubeadm, mainly because different Kubernetes versions differ in detail and kubeadm's updates and support track them best. Kubeadm is the tool Kubernetes provides for quickly installing and initializing a cluster; it is updated in step with each new Kubernetes release. It is strongly recommended to read the official documentation first to understand what each component and object does.



When creating a Kubernetes cluster, Alibaba Cloud Container Service offers two network plug-ins: Terway and Flannel.

Flannel: uses the simple, stable community Flannel CNI plug-in. Combined with the high-speed network of Alibaba Cloud's VPC, it gives the cluster a high-performance, stable container network. Its feature set is small, however; for example, it does not support the standard Kubernetes NetworkPolicy.
Terway: a network plug-in developed by Alibaba Cloud Container Service. It assigns Alibaba Cloud elastic network interfaces to containers, supports access policies between containers based on the standard Kubernetes NetworkPolicy, and supports per-container bandwidth limits. If you do not need NetworkPolicy, choose Flannel; otherwise Terway is recommended.


This article therefore focuses on the basic use of flannel.


System environment

system       kernel                  docker    ip   host name    configuration
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   -    k8s-master   2-core 4G
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   -    k8s-node01   2-core 4G
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   -    k8s-node02   2-core 4G






Note: make sure every machine has at least 2 CPU cores and 2 GB of memory.
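The note above can be checked from the shell before you start; a minimal sketch (the 1800 MB threshold allows for memory the kernel reserves):

```shell
# Warn if the machine has fewer than 2 CPUs or less than ~2 GB of RAM
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
[ "$cpus" -ge 2 ]      || echo "WARNING: kubeadm needs at least 2 CPUs (found $cpus)"
[ "$mem_mb" -ge 1800 ] || echo "WARNING: kubeadm needs ~2GB RAM (found ${mem_mb}MB)"
```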

2, Preparations

Turn off firewall

If the firewall is enabled on a host, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of the official Installing kubeadm documentation. For simplicity, disable the firewall on every node:

systemctl stop firewalld
systemctl disable firewalld



Disable SELinux

#Temporarily disable
setenforce 0
#Permanently disable (takes effect after reboot)
vim /etc/selinux/config    #Or modify /etc/sysconfig/selinux; set SELINUX=disabled
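Editing the file by hand works; a non-interactive alternative (a sketch, assuming the stock `SELINUX=enforcing` line is present) is:

```shell
# Permanently disable SELinux in the config file (takes effect after reboot)
if [ -f /etc/selinux/config ]; then
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
fi
```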


Modify k8s.conf file

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system


Turn off swap

#Temporarily disable swap
swapoff -a


Modify the /etc/fstab file and comment out the swap auto-mount entry (permanently disables swap; takes effect after reboot):

#Comment out the following fields
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
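The same edit can be scripted; a sketch that prefixes # to any uncommented swap entry (keeping a backup of /etc/fstab first):

```shell
# Comment out every uncommented swap mount in /etc/fstab (permanent after reboot)
if [ -w /etc/fstab ]; then
    cp /etc/fstab /etc/fstab.bak
    sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
fi
```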


Install docker

Docker installation is not covered here; refer to the official Docker installation documentation.



Modify host name

hostnamectl set-hostname k8s-master

Note: the hostname must not contain underscores; hyphens are allowed (in the middle only).
Otherwise k8s will report an error such as:

could not convert cfg to an internal cfg: nodeRegistration.name: Invalid value: "k8s_master": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
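The error message gives the exact validation regex, so a candidate hostname can be checked before setting it; a sketch:

```shell
# Check hostnames against the DNS-1123 subdomain regex from the error above
re='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
for name in k8s-master k8s_master; do
    if printf '%s\n' "$name" | grep -Eq "$re"; then
        echo "$name: valid"
    else
        echo "$name: invalid"
    fi
done
# prints "k8s-master: valid" then "k8s_master: invalid"
```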


3, Install kubeadm, kubelet, kubectl

Install kubeadm, kubelet and kubectl on every node.

Modify yum installation source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install software

The latest version is: 1.18.1

yum install -y kubelet-1.18.1-0 kubeadm-1.18.1-0 kubectl-1.18.1-0
systemctl enable kubelet && systemctl start kubelet

The above is the part that both the master and the node need to operate.


4, Initialize Master node

Run initialization command

kubeadm init --kubernetes-version=1.18.1 \
    --apiserver-advertise-address=<master-ip> \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=<service-cidr> \
    --pod-network-cidr=10.244.0.0/16

Note: replace <master-ip> with the master node's IP and <service-cidr> with your chosen Service network range; 10.244.0.0/16 is flannel's default Pod network range.


Parameter explanation:

--kubernetes-version: specifies the k8s version;
--apiserver-advertise-address: specifies the IP address that kube-apiserver listens on, i.e. the master's local IP;
--pod-network-cidr: specifies the Pod network range;
--service-cidr: specifies the Service (SVC) network range;
--image-repository: specifies the Alibaba Cloud image repository address.


This step is critical: by default kubeadm pulls the required images from k8s.gcr.io, which is not reachable from mainland China, so we point it at the Alibaba Cloud image repository with --image-repository.

After the cluster is initialized successfully, the following information is returned:
Record the last part of the generation, which needs to be executed when other nodes join the Kubernetes cluster.
The output is as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <master-ip>:6443 --token rkt1p6.3m7z7pqfvaehxmxi \
    --discovery-token-ca-cert-hash sha256:dd384c51b5a38cce275dd3e178f6f1601b644f5fc2bc2f8cee9c2b031b119143

Keep the kubeadm join command above; it will be needed when the worker nodes join.
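The token in the join command expires after 24 hours by default. If it has expired by the time a node joins, a fresh, complete join command can be printed on the master with kubeadm token create --print-join-command; a guarded sketch:

```shell
# Print a new, complete kubeadm join command (run this on the master);
# falls back to a notice when kubeadm is not installed on this machine.
if command -v kubeadm >/dev/null 2>&1; then
    kubeadm token create --print-join-command
else
    echo "kubeadm not installed on this machine"
fi
```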


Configure kubectl tools

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Installing flannel

mkdir k8s
cd k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the "Network" value in the yml (10.244.0.0/16 by default) differs from the --pod-network-cidr passed to kubeadm init, change it to match; otherwise cluster IPs may not be reachable across nodes.

Because the --pod-network-cidr used above matches this default, the yaml file does not need to be changed.
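Whether the yaml needs editing can be checked mechanically; a sketch (POD_CIDR is whatever was passed to --pod-network-cidr):

```shell
# Compare the flannel "Network" value with the pod CIDR given to kubeadm init
POD_CIDR="10.244.0.0/16"
if grep -q "\"Network\": \"${POD_CIDR}\"" kube-flannel.yml 2>/dev/null; then
    echo "kube-flannel.yml already matches ${POD_CIDR}"
else
    echo "edit the Network value in kube-flannel.yml to ${POD_CIDR}"
fi
```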


View the images the yaml needs

# cat kube-flannel.yml |grep image|uniq
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x

Note: pulling these images requires access to quay.io, which is often unreachable from mainland China.


However, mirrors are available in Alibaba Cloud's Container Registry (ACR). The steps to find them are as follows:


Note: you must use an alicloud account to log in.

Click management console


Click image search on the left



Enter the keyword flannel:v0.12.0-amd64 and select the second result.



Click Copy public address


Since I am in Shanghai, the registry domain is registry.cn-shanghai.aliyuncs.com; if you choose another region, the domain will differ.

Here is the version number we need.


The complete command to download the image is:

docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64

The other architecture images are available from the same repository.

Therefore, the required image download commands are:

docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x


Tag the images so their names match those referenced in the yaml file:

docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x

Note: these pull and tag commands must also be executed on each node.
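The ten commands above can be generated in one loop, which is handy when repeating them on each node; a sketch that prints the commands (pipe the output to sh to actually run them):

```shell
# Emit the docker pull/tag command pair for every flannel architecture image
MIRROR=registry.cn-shanghai.aliyuncs.com/leozhanggg
for arch in amd64 arm64 arm ppc64le s390x; do
    echo "docker pull ${MIRROR}/flannel:v0.12.0-${arch}"
    echo "docker tag ${MIRROR}/flannel:v0.12.0-${arch} quay.io/coreos/flannel:v0.12.0-${arch}"
done
```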


Load flannel

kubectl apply -f kube-flannel.yml


View Pod status

Wait a few minutes and confirm that all pods are Running.

# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE    IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-7ff77c879f-8jpb7             0/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   coredns-7ff77c879f-9gcjr             1/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-gz8jd          1/1     Running   0          27s           k8s-master   <none>           <none>
kube-system   kube-proxy-wh548                     1/1     Running   0          125m          k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          125m          k8s-master   <none>           <none>

Note: the coredns pods get their addresses from the Pod network segment.


Set startup

systemctl enable kubelet


Command Completion

(master only)

yum install -y bash-completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source  ~/.bashrc

You have to log out once and log in again


5, Join nodes to the cluster


Make sure the preparation steps above have been carried out on every node!

Modify the host name to k8s-node01

hostnamectl set-hostname k8s-node01


Join node

Log in to each node and make sure docker, kubeadm, kubelet and kubectl are installed.

kubeadm join <master-ip>:6443 --token rkt1p6.3m7z7pqfvaehxmxi \
    --discovery-token-ca-cert-hash sha256:dd384c51b5a38cce275dd3e178f6f1601b644f5fc2bc2f8cee9c2b031b119143


Set startup

systemctl enable kubelet


View nodes

Log in to the master and check with:

# kubectl get nodes -o wide
NAME         STATUS     ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready      master   128m   v1.18.1                 <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8
k8s-node01   Ready      <none>   30s    v1.18.1                 <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8
k8s-node02   NotReady   <none>   19s    v1.18.1                 <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8


Log in to a node and view its network interfaces:

# ifconfig ...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet  netmask  broadcast
        inet6 fe80::80e7:9eff:fe5d:3d94  prefixlen 64  scopeid 0x20<link>
        ether 82:e7:9e:5d:3d:94  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0...

It will create a flannel.1 network card for flannel network communication.


7, Publish apps using yml

Take flaskapp as an example



flaskapp-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000



flaskapp-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: flaskapp-1
  labels:
    run: flaskapp-1
spec:
  type: NodePort
  ports:
  - port: 5000
    name: flaskapp-port
    targetPort: 5000
    protocol: TCP
    nodePort: 30005
  selector:
    run: flaskapp-1


Load yml file

kubectl apply -f flaskapp-service.yaml 
kubectl apply -f flaskapp-deployment.yaml


View pod status

# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP   NODE         NOMINATED NODE   READINESS GATES
flaskapp-1-5d96dbf59b-94v6v   1/1     Running   0          33s        k8s-node02   <none>           <none>

Wait a few minutes to make sure it's Running


ping pod ip

# ping -c 3 <pod-ip>
PING <pod-ip> 56(84) bytes of data.
64 bytes from <pod-ip>: icmp_seq=1 ttl=63 time=1.86 ms
64 bytes from <pod-ip>: icmp_seq=2 ttl=63 time=0.729 ms
64 bytes from <pod-ip>: icmp_seq=3 ttl=63 time=1.05 ms

--- <pod-ip> ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.729/1.215/1.862/0.477 ms

If the ping succeeds, the flannel network is working.


Visit page

Access the service with the master IP plus the NodePort (30005).


The effect is as follows:



Note: the node ip+nodeport can also be accessed.

Tags: Kubernetes network Docker kubelet

Posted on Sat, 23 May 2020 03:57:28 -0400 by new7media