Using kubeadm to build a Kubernetes (k8s) cluster
1, Environment preparation:
There are two CentOS 7 hosts: 10.3.4.166 (master) and 10.3.4.167 (node01). Docker is installed on both. The following operations are performed on both hosts.
Modify the contents of the /etc/hosts file
[zjin@master ~]$ cat /etc/hosts
10.3.4.166 master
10.3.4.167 node01
Disable firewall
[zjin@master ~]$ sudo systemctl stop firewalld
[zjin@master ~]$ sudo systemctl disable firewalld
Turn off SELinux
cat /etc/selinux/config
SELINUX=disabled
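Setting SELINUX=disabled in the config file only takes effect after a reboot. To turn SELinux off immediately, the usual companion command (not part of the original steps, added here for completeness) is:

# disable SELinux for the current boot; the config file change above makes it permanent
sudo setenforce 0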
Create the /etc/sysctl.d/k8s.conf file with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Then execute the following commands:
[zjin@master ~]$ sudo modprobe br_netfilter
[zjin@master ~]$ sudo sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
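Note that modprobe br_netfilter does not persist across reboots. One common way to load the module automatically on a systemd host such as CentOS 7 (a sketch, not part of the original steps) is a modules-load.d entry:

# load br_netfilter at boot so the bridge sysctls above keep working after a reboot
cat <<EOF | sudo tee /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF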
2, Pull images:
On the master:
docker pull akipa11/kube-apiserver-amd64:v1.10.0
docker pull akipa11/kube-scheduler-amd64:v1.10.0
docker pull akipa11/kube-controller-manager-amd64:v1.10.0
docker pull akipa11/kube-proxy-amd64:v1.10.0
docker pull akipa11/k8s-dns-kube-dns-amd64:1.14.8
docker pull akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull akipa11/k8s-dns-sidecar-amd64:1.14.8
docker pull akipa11/etcd-amd64:3.1.12
docker pull akipa11/flannel:v0.10.0-amd64
docker pull akipa11/pause-amd64:3.1

docker tag akipa11/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag akipa11/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag akipa11/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag akipa11/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag akipa11/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag akipa11/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag akipa11/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag akipa11/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag akipa11/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
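Pulling and retagging each image by hand is repetitive. A small shell loop can do the same work; this is only a sketch, assuming the akipa11/<name> mirror images map one-to-one onto the k8s.gcr.io names used above (flannel is handled separately because it is retagged for quay.io):

# Sketch: pull the images listed above from the akipa11 mirror and retag them for k8s.gcr.io
for img in kube-apiserver-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 \
           kube-controller-manager-amd64:v1.10.0 kube-proxy-amd64:v1.10.0 \
           k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8 \
           k8s-dns-sidecar-amd64:1.14.8 etcd-amd64:3.1.12 pause-amd64:3.1; do
  docker pull akipa11/$img
  docker tag akipa11/$img k8s.gcr.io/$img
done
# flannel is tagged for quay.io rather than k8s.gcr.io
docker pull akipa11/flannel:v0.10.0-amd64
docker tag akipa11/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64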
On node01:
docker pull akipa11/kube-proxy-amd64:v1.10.0
docker pull akipa11/flannel:v0.10.0-amd64
docker pull akipa11/pause-amd64:3.1
docker pull akipa11/kubernetes-dashboard-amd64:v1.8.3
docker pull akipa11/heapster-influxdb-amd64:v1.3.3
docker pull akipa11/heapster-grafana-amd64:v4.4.3
docker pull akipa11/heapster-amd64:v1.4.2
docker pull akipa11/k8s-dns-kube-dns-amd64:1.14.8
docker pull akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull akipa11/k8s-dns-sidecar-amd64:1.14.8

docker tag akipa11/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag akipa11/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag akipa11/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag akipa11/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag akipa11/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag akipa11/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag akipa11/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag akipa11/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag akipa11/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag akipa11/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
3, Install kubeadm, kubelet, kubectl
1. Configure the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet, kubectl
The version installed here is 1.10.0-0 for all three packages:
$ yum makecache fast
$ yum install -y kubelet-1.10.0-0
$ yum install -y kubectl-1.10.0-0
$ yum install -y kubeadm-1.10.0-0
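It is also worth enabling the kubelet service so it starts on boot; this step is not shown above, but kubeadm's preflight checks typically warn when the kubelet service is not enabled:

$ systemctl enable kubelet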
3. Configure kubelet
- Modify the cgroup driver parameter

Edit the kubelet drop-in file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the cgroup driver in the KUBELET_CGROUP_ARGS parameter to cgroupfs, so that it matches the cgroup driver used by Docker.
- Add a swap configuration parameter

In the same file, before the ExecStart line, add the following:
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Finally, reload the systemd configuration:
systemctl daemon-reload
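For reference, after these two edits the relevant lines of 10-kubeadm.conf might look roughly like the excerpt below. This is only a sketch based on a typical kubeadm 1.10 drop-in; leave the other Environment lines and ExecStart arguments in your file as they are.

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt, sketch)
[Service]
# swap flag added before the ExecStart line
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# cgroup driver set to cgroupfs to match Docker
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# ... other Environment lines unchanged ...
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS ... $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS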
4, Initialize the cluster
Execute the following command on the master:
[zjin@master ~]$ sudo kubeadm init \
> --kubernetes-version=v1.10.0 \
> --pod-network-cidr=10.244.0.0/16 \
> --apiserver-advertise-address=10.3.4.166 \
> --ignore-preflight-errors=Swap
kubeadm init is the cluster initialization command, and --kubernetes-version specifies the cluster version to install. Because we chose flannel as the Pod network plugin, we need to specify --pod-network-cidr=10.244.0.0/16. --apiserver-advertise-address is the address the API server advertises, here the IP of our master node, and --ignore-preflight-errors=Swap tells kubeadm to ignore the preflight error about swap.
Finally, we see the message indicating that the cluster was initialized successfully:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.3.4.166:6443 --token b9ftqo.6a3igsfxq96b1dt6 --discovery-token-ca-cert-hash sha256:d4517be6c40e40e1bbc749b24b35c0a7f68c0f75c1380c32b24d1ccb42e0decc
Run the following commands to configure kubectl access to the cluster:
[zjin@master ~]$ sudo mkdir -p $HOME/.kube
[zjin@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[zjin@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
After kubectl is configured, we can use it to view cluster information:
[zjin@master ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[zjin@master ~]$ kubectl get csr
NAME        AGE       REQUESTOR            CONDITION
csr-nff2l   6m        system:node:master   Approved,Issued
If you encounter errors during cluster installation, you can use the following commands to reset:
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
5, Install the Pod network
Here we install the flannel network plug-in.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note that the image version number in the file should be changed to v0.10.0.
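A quick way to do this is with sed. This is just a sketch; it assumes the manifest references quay.io/coreos/flannel images with some other version tag, so check the file first, since the upstream manifest changes over time:

# Sketch: inspect the image lines, then pin the flannel version to v0.10.0
grep 'image:' kube-flannel.yml
sed -i 's#quay.io/coreos/flannel:v[0-9.]*#quay.io/coreos/flannel:v0.10.0#g' kube-flannel.yml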
[zjin@master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy "psp.flannel.unprivileged" created
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.apps "kube-flannel-ds-amd64" created
daemonset.apps "kube-flannel-ds-arm64" created
daemonset.apps "kube-flannel-ds-arm" created
daemonset.apps "kube-flannel-ds-ppc64le" created
daemonset.apps "kube-flannel-ds-s390x" created
After the installation, we can use the kubectl get pods command to view the running status of the components in the cluster:
[zjin@master ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                       1/1       Running   0          40s
kube-system   kube-apiserver-master             1/1       Running   0          40s
kube-system   kube-controller-manager-master    1/1       Running   0          40s
kube-system   kube-dns-86f4d74b45-4vbx5         3/3       Running   0          12m
kube-system   kube-flannel-ds-amd64-wskq5       1/1       Running   0          52s
kube-system   kube-proxy-7dk2l                  1/1       Running   0          12m
kube-system   kube-scheduler-master             1/1       Running   0          40s
As you can see, all are Running.
6, Add a node
Install Docker, kubeadm, kubelet, and kubectl with the same versions on node01 (10.3.4.167), then execute the following command:
[zjin@node01 ~]$ sudo kubeadm join 10.3.4.166:6443 --token ebimj5.91xj7atpxbke4xyz \
    --discovery-token-ca-cert-hash sha256:1eda2afcd5711343714ec2d2b6c6ea73ec06737ee350b229d5b2eebfd82fb58a \
    --ignore-preflight-errors=Swap
If an error is reported:
[preflight] Some fatal errors occurred:
        [ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: fork/exec /bin/crictl -r /var/run/dockershim.sock info: no such file or directory
This error is caused by the installed cri-tools version. You can uninstall cri-tools to work around it:
yum remove cri-tools
Then execute the join command again:
[zjin@node01 ~]$ sudo kubeadm join 10.3.4.166:6443 --token ebimj5.91xj7atpxbke4xyz \
    --discovery-token-ca-cert-hash sha256:1eda2afcd5711343714ec2d2b6c6ea73ec06737ee350b229d5b2eebfd82fb58a \
    --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.3.4.166:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.4.166:6443"
[discovery] Requesting info from "https://10.3.4.166:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.4.166:6443"
[discovery] Successfully established connection with API Server "10.3.4.166:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Then copy the ~/.kube/config file from the master node to the same location on the current node so the kubectl command-line tool can also be used there.
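For example (a sketch; it assumes SSH access from node01 to the master as the zjin user, so adjust the user and host to your setup):

# Sketch: copy the admin kubeconfig from the master to node01
[zjin@node01 ~]$ mkdir -p $HOME/.kube
[zjin@node01 ~]$ scp zjin@10.3.4.166:.kube/config $HOME/.kube/config

Back on the master, kubectl get nodes now shows both nodes: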
[zjin@master ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    47m       v1.10.0
node01    Ready     <none>    3m        v1.10.0
As you can see, node01 has also joined the cluster.