Building a Kubernetes cluster by hand is a very tedious job. To simplify the work, there are many installation and configuration tools, such as kubeadm, kubespray, RKE and other components. I finally chose the official kubeadm, mainly because there are differences between Kubernetes versions and kubeadm's updates and support track them best. Kubeadm is a tool provided by Kubernetes to quickly install and initialize a Kubernetes cluster. It is still maturing, and it is updated in step with each new Kubernetes release. It is strongly recommended to read the official documents first to understand the function of each component and object:
https://kubernetes.io/docs/concepts/
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
When creating a Kubernetes cluster, Alibaba Cloud Container Service provides two network plug-ins: Terway and Flannel.
Flannel: uses the community's simple and stable Flannel CNI plug-in. Combined with the high-speed network of Alibaba Cloud's VPC, it gives the cluster a high-performance and stable container network. However, its feature set is simple; for example, it does not support the standard Kubernetes NetworkPolicy.
Terway: a network plug-in developed by Alibaba Cloud Container Service. It assigns Alibaba Cloud elastic network interfaces to containers, supports access policies between containers based on the standard Kubernetes NetworkPolicy, and supports bandwidth limiting for individual containers. If you do not need NetworkPolicy, you can choose Flannel; in all other cases Terway is recommended.
Therefore, this article mainly introduces a simple deployment with Flannel.
System environment
system       kernel                  docker    ip               host name    configuration
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   192.168.128.130  k8s-master   2-core 4G
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   192.168.128.131  k8s-node01   2-core 4G
centos 7.6   3.10.0-957.el7.x86_64   19.03.5   192.168.128.132  k8s-node02   2-core 4G
Note: please make sure each machine has at least 2 CPU cores and 2G of memory
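A quick way to verify this on each machine (plain shell checks, nothing Kubernetes-specific assumed):
#Number of CPU cores; should print 2 or more
nproc
#Total memory; should be at least 2G
free -h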
2, Preparations
Turn off firewall
If the firewall is enabled on a host, you need to open the ports required by the various Kubernetes components; see the "Check required ports" section of the Installing kubeadm document. For simplicity, disable the firewall on every node:
systemctl stop firewalld
systemctl disable firewalld
Disable SELINUX
#Temporarily disable
setenforce 0
#Permanently disable
vim /etc/selinux/config    #or modify /etc/sysconfig/selinux
SELINUX=disabled
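If you prefer a non-interactive edit, a sed one-liner can make the permanent change; this is a sketch assuming the stock CentOS config line reads SELINUX=enforcing:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
#Verify: prints Permissive after setenforce 0, Disabled after a reboot
getenforce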
Modify k8s.conf file
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
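If sysctl reports that these bridge keys do not exist, the br_netfilter kernel module is probably not loaded; loading it first is a safe extra step (an assumption about the kernel state, not part of the original instructions):
modprobe br_netfilter
#Should print net.bridge.bridge-nf-call-iptables = 1
sysctl net.bridge.bridge-nf-call-iptables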
Turn off swap
#Temporarily turn off
swapoff -a
Modify the /etc/fstab file and comment out the swap auto-mount entry (permanently disables swap; takes effect after restart)
#Comment out the following line
/dev/mapper/cl-swap swap swap defaults 0 0
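For a non-interactive edit, a sed sketch like the following comments out any active swap entry (the regex assumes your swap lines look like the one above):
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
#Verify: the Swap row should show 0 total after swapoff -a
free -m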
Install docker
Not covered here; please refer to this link:
https://www.cnblogs.com/xiao987334176/p/11771657.html
Modify host name
hostnamectl set-hostname k8s-master
Note: the host name cannot contain underscores; only hyphens are allowed.
Otherwise, k8s will report an error:
could not convert cfg to an internal cfg: nodeRegistration.name: Invalid value: "k8s_master": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
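Optionally, add all three machines to /etc/hosts on every node so they can resolve each other by name (the IPs and names below come from the environment table above; adjust to yours):
cat <<EOF >> /etc/hosts
192.168.128.130 k8s-master
192.168.128.131 k8s-node01
192.168.128.132 k8s-node02
EOF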
3, Install kubeadm, kubelet, kubectl
Install kubeadm, kubelet and kubectl on every node
Modify yum installation source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install software
The latest version at the time of writing is 1.18.1
yum install -y kubelet-1.18.1-0 kubeadm-1.18.1-0 kubectl-1.18.1-0
systemctl enable kubelet && systemctl start kubelet
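To confirm the expected version landed, a quick check (plain version queries, nothing else assumed):
#Both should report v1.18.1
kubeadm version -o short
kubectl version --client --short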
The above is the part that both the master and the node need to operate.
4, Initialize Master node
Run initialization command
kubeadm init --kubernetes-version=1.18.1 \
    --apiserver-advertise-address=192.168.128.130 \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
Note: change --apiserver-advertise-address to the master node's IP
Parameter interpretation:
--kubernetes-version: specifies the k8s version;
--apiserver-advertise-address: specifies the IP address the kube-apiserver listens on, i.e. the master's local IP;
--pod-network-cidr: specifies the Pod network range (10.244.0.0/16);
--service-cidr: specifies the Service (SVC) network range;
--image-repository: specifies the Alibaba Cloud image repository address.
This step is critical: by default kubeadm downloads the required images from k8s.gcr.io, which cannot be accessed from mainland China. Therefore you need to point it at the Alibaba Cloud image repository via --image-repository.
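If you want to test image access before running init, kubeadm can pre-pull the images with the same flags; this is optional and simply fails fast if the repository is unreachable:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1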
After the cluster is initialized successfully, the following information is returned:
Record the kubeadm join command generated at the end of the output; it needs to be executed when other nodes join the Kubernetes cluster.
The output is as follows:
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.128.130:6443 --token rkt1p6.3m7z7pqfvaehxmxi \
    --discovery-token-ca-cert-hash sha256:dd384c51b5a38cce275dd3e178f6f1601b644f5fc2bc2f8cee9c2b031b119143
Be sure to keep the kubeadm join command; it will be used later.
Configure kubectl tools
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
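kubectl should now be able to reach the cluster. Note that the master typically shows NotReady until a pod network is installed; that is expected at this point:
#STATUS will be NotReady until flannel is deployed below
kubectl get nodes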
Installing flannel
mkdir k8s
cd k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If "Network": "10.244.0.0/16" in the yml differs from the --pod-network-cidr passed to kubeadm init, it must be changed to match. Otherwise, the Cluster IPs between nodes may not work.
Because my kubeadm init above used --pod-network-cidr=10.244.0.0/16, this yaml file doesn't need to be changed.
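A quick way to double-check is to grep the manifest; the sed line is only a sketch for the case where your CIDR differs (<your-pod-cidr> is a placeholder):
grep -n '"Network"' kube-flannel.yml
#Only if it differs from your --pod-network-cidr:
#sed -i 's#10.244.0.0/16#<your-pod-cidr>#' kube-flannel.yml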
View the images the yaml needs
# cat kube-flannel.yml |grep image|uniq
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
Note: these images are hosted on quay.io, which may not be reachable from mainland China.
However, copies are available in Alibaba Cloud's Container Registry (ACR). The access link is as follows:
https://www.aliyun.com/product/acr
Note: you must use an alicloud account to log in.
Click management console
Click image search on the left
Enter the keyword flannel:v0.12.0-amd64 and select the second result.
Click Copy public address
Since I am in Shanghai, the registry domain is registry.cn-shanghai.aliyuncs.com. If you choose another region, the domain will differ.
Here is the version number we need.
The complete command to download the image is:
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64
The other images can be found here in the same way.
Therefore, the required image download commands are:
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le
docker pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x
Tag the images so the names match the yaml file:
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x
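To confirm all five images are pulled and retagged (plain docker, nothing assumed):
#Both the aliyuncs.com and quay.io names should be listed
docker images | grep flannel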
Note: these pull and tag commands must also be executed on each node.
Load flannel
kubectl apply -f kube-flannel.yml
View Pod status
Wait a few minutes and make sure all pods are Running
# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE    IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-7ff77c879f-8jpb7             0/1     Running   0          125m   10.244.0.3        k8s-master   <none>           <none>
kube-system   coredns-7ff77c879f-9gcjr             1/1     Running   0          125m   10.244.0.2        k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          125m   192.168.128.130   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          125m   192.168.128.130   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          125m   192.168.128.130   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-gz8jd          1/1     Running   0          27s    192.168.128.130   k8s-master   <none>           <none>
kube-system   kube-proxy-wh548                     1/1     Running   0          125m   192.168.128.130   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          125m   192.168.128.130   k8s-master   <none>           <none>
Note: the coredns pods get their addresses from the 10.244.0.0/16 pod network segment
Enable kubelet on boot
systemctl enable kubelet
Command Completion
(master only)
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
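If you also use a short alias for kubectl, completion can be attached to it as well; this is a convenience sketch, not part of the original setup:
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -F __start_kubectl k" >> ~/.bashrc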
You need to log out and log back in once for completion to take effect.
5, Join nodes to the cluster
Preparation
Please check the preparations above and make sure they have all been carried out on each node!
Modify the host name to k8s-node01 (and to k8s-node02 on the second node)
hostnamectl set-hostname k8s-node01
Join node
Log in to each node, make sure docker, kubeadm, kubelet and kubectl are installed, then run the join command recorded earlier:
kubeadm join 192.168.128.130:6443 --token rkt1p6.3m7z7pqfvaehxmxi \
    --discovery-token-ca-cert-hash sha256:dd384c51b5a38cce275dd3e178f6f1601b644f5fc2bc2f8cee9c2b031b119143
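If the token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
kubeadm token create --print-join-command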
Enable kubelet on boot
systemctl enable kubelet
View nodes
Log in to the master and view the nodes with:
# kubectl get nodes -o wide
NAME         STATUS     ROLES    AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready      master   128m   v1.18.1   192.168.128.130   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8
k8s-node01   Ready      <none>   30s    v1.18.1   192.168.128.131   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8
k8s-node02   NotReady   <none>   19s    v1.18.1   192.168.128.132   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.8
Log in to a node and view its IP addresses:
# ifconfig
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::80e7:9eff:fe5d:3d94  prefixlen 64  scopeid 0x20<link>
        ether 82:e7:9e:5d:3d:94  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0
...
Flannel creates a flannel.1 network interface on each node for flannel network communication.
6, Publish an app using yml
Take flaskapp as an example
flaskapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
        - name: flaskapp-1
          image: jcdemo/flaskapp
          ports:
            - containerPort: 5000
flaskapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-1
  labels:
    run: flaskapp-1
spec:
  type: NodePort
  ports:
    - port: 5000
      name: flaskapp-port
      targetPort: 5000
      protocol: TCP
      nodePort: 30005
  selector:
    run: flaskapp-1
Load yml file
kubectl apply -f flaskapp-service.yaml
kubectl apply -f flaskapp-deployment.yaml
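To confirm the Service exposes the fixed NodePort (a simple query against the object just created):
#PORT(S) should show 5000:30005/TCP
kubectl get svc flaskapp-1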
View pod status
# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
flaskapp-1-5d96dbf59b-94v6v   1/1     Running   0          33s   10.244.2.2   k8s-node02   <none>           <none>
Wait a few minutes and make sure the pod is Running
Ping the pod IP
# ping 10.244.2.2 -c 3
PING 10.244.2.2 (10.244.2.2) 56(84) bytes of data.
64 bytes from 10.244.2.2: icmp_seq=1 ttl=63 time=1.86 ms
64 bytes from 10.244.2.2: icmp_seq=2 ttl=63 time=0.729 ms
64 bytes from 10.244.2.2: icmp_seq=3 ttl=63 time=1.05 ms

--- 10.244.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.729/1.215/1.862/0.477 ms
If the pod IP pings successfully, the flannel network is working.
Visit page
Access with master ip+nodeport
http://192.168.128.130:30005/
The effect is as follows:
Note: node ip+nodeport also works.
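A command-line check works too; this simply curls the NodePort fixed in the Service above (any node IP can replace the master IP):
curl http://192.168.128.130:30005/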