Deploying a Kubernetes (k8s) cluster with kubeadm


Experimental environment:
Master node: 192.168.1.10 (master)
Node 1: 192.168.1.20 (node01)
Node 2: 192.168.1.30 (node02)

Environmental preparation:
Set the hostname on each of the three virtual machines, assign the corresponding IP addresses, and add all of them to /etc/hosts for name resolution. Turn off the firewall, flush the iptables rules, and disable SELinux. The clocks of the three machines must be synchronized, and swap must be disabled on all of them.

Here we install Kubernetes version 1.15.0 and pin the Docker installation to version 18.09.0.
[root@localhost ~]# yum install -y docker-ce-18.09.0-3.el7 docker-ce-cli-18.09.0-3.el7 containerd.io-1.2.0-3.el7

[root@localhost ~]# hostnamectl set-hostname master    //run on the master node
[root@localhost ~]# hostnamectl set-hostname node01    //run on the node01 node
[root@localhost ~]# hostnamectl set-hostname node02    //run on the node02 node
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]# iptables -F
[root@master ~]# iptables-save
[root@master ~]# vim /etc/selinux/config
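For reference, disabling SELinux means changing the SELINUX line in /etc/selinux/config to disabled (this takes effect after a reboot), while setenforce 0 turns enforcement off for the current session. A minimal sketch, assuming the default SELINUX=enforcing:

```
[root@master ~]# setenforce 0        //turn SELinux off for the running system
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled
```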

#Disable swap on all 3 virtual machines
[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0      //comment out the swap line so it stays off after a reboot

[root@master ~]# free -h
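If swap was disabled correctly, free -h should report the swap totals as zero, roughly like this:

```
[root@master ~]# free -h | grep -i swap
Swap:            0B          0B          0B
```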

[root@master ~]# vim /etc/hosts
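Based on the experimental environment above, the /etc/hosts entries added on all three machines look like:

```
192.168.1.10 master
192.168.1.20 node01
192.168.1.30 node02
```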

#Enable passwordless SSH from the master to the nodes
#Press Enter three times to accept the defaults (no passphrase)
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id root@node01
[root@master ~]# ssh-copy-id root@node02
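A quick check that key-based login works; the commands should print the remote hostnames without asking for a password:

```
[root@master ~]# ssh node01 hostname
node01
[root@master ~]# ssh node02 hostname
node02
```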

#Turn on iptables bridging
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf    //if this reports "No such file or directory", run the following command first
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf    //the two node machines need to do the same
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
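Note that modprobe only loads br_netfilter until the next reboot. If you want it loaded automatically at boot, one option (not part of the original steps) is a modules-load.d entry:

```
[root@master ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```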

The basic environment is now ready. Operate on the master node first.
[root@master ~]# cd /etc/yum.repos.d/

[root@master yum.repos.d]# vi docker-ce.repo
[docker-ce]
name=docker-ce
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=0

[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
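With both repo files in place, the yum cache can be refreshed and the new repos verified before installing anything:

```
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum repolist    //docker-ce and kubernetes should both be listed
```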

[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node01:/etc/yum.repos.d/
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node02:/etc/yum.repos.d/
#Note: the nodes also need these repo files so they can download the same packages

##View the available docker versions
[root@master ~]# yum list docker-ce --showduplicates | sort -r
[root@master yum.repos.d]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl enable kubelet
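After installation, the pinned versions can be confirmed before going any further:

```
[root@master ~]# kubeadm version -o short
v1.15.0
[root@master ~]# kubelet --version
Kubernetes v1.15.0
```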

#Configure a Docker registry mirror (accelerator)
[root@master ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

[root@master ~]# vi /etc/docker/daemon.json
{"registry-mirrors": ["http://f1361db2.m.daocloud.io"]}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker

[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests               //manifests directory
/etc/sysconfig/kubelet                  //configuration file
/etc/systemd/system/kubelet.service
/usr/bin/kubelet

//At this point the preparation is finished and initialization can start. However, because of the domestic network restrictions we cannot pull the images directly from Google's registry (k8s.gcr.io). Instead, we manually pull them from a Docker Hub mirror and retag them, which is done here with a script:
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1

//Here I have already downloaded these images, so I only need to import (docker load) them.
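The repeated pull/tag/rmi sequence above can also be compressed into a small loop; a sketch using the same images and versions (coredns comes from a different Docker Hub namespace, so it is handled separately):

```
#!/bin/bash
# pull the k8s v1.14.1 component images from the Docker Hub mirror,
# retag them to the k8s.gcr.io names kubeadm expects, then drop the mirror tags
images=(kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1
        kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10)
for img in "${images[@]}"; do
    docker pull mirrorgooglecontainers/$img
    docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
    docker rmi  mirrorgooglecontainers/$img
done
# coredns lives under its own namespace on Docker Hub
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1
```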
![](https://s1.51cto.com/images/blog/202001/31/15ea2bf8ed379fb9b7e39b8b3f277809.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

[root@master images]# systemctl enable kubelet
[root@master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

![](https://s1.51cto.com/images/blog/202001/31/3ef96ff9c2d5f2e016ea0a6a7cfa974d.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](https://s1.51cto.com/images/blog/202001/31/4d0482468b427e05a819e2f7d509cd28.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

**You can see that the master is not ready (NotReady). This is because the flannel add-on is missing; without the Pod network, Pods cannot communicate. You can also check the health status of the components:**

![](https://s1.51cto.com/images/blog/202001/31/917fff812a293acd25581ea3980b7c43.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

Add the network component (flannel)
//flannel can be obtained from https://github.com/coreos/flannel

![](https://s1.51cto.com/images/blog/202001/31/790957863f23ecfd017932813056920c.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

**Seeing that a lot of objects were created is not enough; you also need to check that the flannel Pods are actually up and running before the deployment can be considered complete.**

![](https://s1.51cto.com/images/blog/202001/31/05643601871197e8bc429e970af2b43f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](https://s1.51cto.com/images/blog/202001/31/dd9c50dfe61096645ead9b89fd14d7dc.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

[root@master ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   14m
kube-public   Active   14m
kube-system   Active   14m

//That completes the installation and deployment of the master node. Next, install the packages on each node and join them to the cluster.
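For reference, the tail of the kubeadm init output (captured in the screenshots above) ends with instructions along these lines for configuring kubectl for the current user, plus the kubeadm join command (token and hash elided here) that the nodes will run in the next step:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```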
[root@node01 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0
[root@node01 ~]# systemctl enable docker kubelet
//Before joining the cluster, we still manually download 2 images on each node, which is faster.
[root@node01 ~]# docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
[root@node01 ~]# docker pull mirrorgooglecontainers/pause:3.1
[root@node01 ~]# docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
[root@node01 ~]# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@node01 ~]# docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
[root@node01 ~]# docker rmi mirrorgooglecontainers/pause:3.1
[root@node01 ~]# kubeadm join 192.168.1.10:6443 --token njus35.kw3hxkys3urmnuob --discovery-token-ca-cert-hash sha256:05761b73b571c18eebd6972fb70323cd3c4d8e0aa7514efa2680411310424184

![](https://s1.51cto.com/images/blog/202001/31/fe13516f4788917d10a04179bd0fa10a.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

//Wait a moment, then verify on the master node; the wait allows the flannel network to synchronize.

![](https://s1.51cto.com/images/blog/202001/31/2ea606191e5700bf75b1a5cb254bc1eb.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

> **How to install a specific kubernetes version: all kubernetes components must be the same version, which mainly means keeping the downloaded components consistent. The components in question are kube-proxy, kube-apiserver, kube-controller-manager and kube-scheduler.**
>
> //List the installed rpm packages
> yum list installed | grep kube
> //Uninstall the installed rpm packages
> yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y
>
> Install the specified kubeadm version
> yum install -y kubelet-1.12.1 kubeadm-1.12.1 kubectl-1.12.1
>
> Set up auto-completion for the kubectl command-line tool
> [root@k8s-master ~]# yum install -y bash-completion
> [root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
> [root@k8s-master ~]# source <(kubectl completion bash)
> [root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

//Set the number of spaces a tab inserts in vim
[root@master ~]# vim .vimrc

![](https://s1.51cto.com/images/blog/202001/31/43f6afc0c6fffa5445b9f6d31b639f6a.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)

[root@master ~]# source .vimrc
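To wrap up, the cluster state can be checked from the master, and a fresh join command can be generated if the original token has expired:

```
[root@master ~]# kubectl get nodes                           //all three nodes should report STATUS Ready
[root@master ~]# kubectl get pods -n kube-system -o wide     //kube-proxy and kube-flannel Pods should be Running on every node
[root@master ~]# kubeadm token create --print-join-command   //prints a new kubeadm join command for additional nodes
```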

