1. Introduction to Kubernetes
1.1 Introduction
The Kubernetes project originates from Google's Borg system; it is arguably a distillation of Borg's design ideas and draws on the experience and lessons learned from running Borg.
While Docker has grown rapidly as an advanced container engine, container technology had already been in use for many years inside Google, where the Borg system runs and manages thousands of containerized applications.
Kubernetes abstracts computing resources at a higher level and delivers the final application service to the user through a carefully orchestrated combination of containers.
1.2 Advantages
- Hides resource management and error handling; users only need to focus on application development.
- Services are highly available and reliable.
- Workloads can run in clusters of thousands of machines.
1.3 Design Architecture
A Kubernetes cluster consists of the node agent kubelet and the Master components (API server, scheduler, etc.), all built on top of a distributed storage system.
Core Components
| Component | Responsibility |
| --- | --- |
| etcd | Stores the state of the entire cluster |
| apiserver | Provides the single entry point for resource operations, along with authentication, authorization, access control, and API registration and discovery |
| controller manager | Maintains the state of the cluster, e.g. fault detection, automatic scaling, rolling updates |
| scheduler | Handles resource scheduling, dispatching Pods to appropriate machines according to the configured scheduling policy |
| kubelet | Maintains the life cycle of containers and manages Volumes (CVI) and Networking (CNI) |
| Container runtime | Handles image management and the actual running of Pods and containers (CRI) |
| kube-proxy | Provides Service discovery and load balancing for Services within the cluster |
In addition to the core components, there are some recommended Add-ons:
- kube-dns: provides DNS services for the entire cluster
- Ingress Controller: provides external network access for Services
- Heapster: provides resource monitoring
- Dashboard: provides a GUI
- Federation: provides clusters across availability zones
- Fluentd-elasticsearch: provides cluster log collection, storage, and query
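On a running cluster, a quick way to see most of these components and add-ons in practice is to list the pods in the kube-system namespace; a minimal check (pod and add-on names will vary from cluster to cluster) might look like:
[root@server2 ~]# kubectl get pod -n kube-system -o wide    //etcd, apiserver, controller-manager, scheduler, kube-proxy and the DNS add-on run here
[root@server2 ~]# kubectl get node    //every node listed runs a kubelet and a container runtime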
Hierarchical structure
2. Kubernetes deployment
2.1 Deploying the experimental environment
- Four virtual machines are required; the harbor registry is deployed on server1, and the remaining machines serve as Kubernetes nodes, each needing more than 1800 MB of memory and no fewer than 2 CPUs
| Host | Role | IP |
| --- | --- | --- |
| server1 | Private registry | 172.25.77.1 |
| server2 | Master node | 172.25.77.2 |
| server3 | Node | 172.25.77.3 |
| server4 | Node | 172.25.77.4 |
- Because all subsequent node deployments are the same, set up passwordless SSH login in advance for ease of operation
[root@server2 yum.repos.d]# ssh-keygen
[root@server2 yum.repos.d]# ssh-copy-id server3
[root@server2 yum.repos.d]# ssh-copy-id server4
- Turn off selinux and the iptables firewall on all nodes (a sketch of the commands is shown after this list)
- Start the private registry (harbor) on server1
[root@server1 ~]# cd harbor/
[root@server1 harbor]# ls
[root@server1 harbor]# docker-compose start
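The exact commands for turning off SELinux and the firewall are not shown above; a minimal sketch for each node, assuming a RHEL/CentOS system that uses firewalld, is:
[root@server2 ~]# setenforce 0    //switch SELinux to permissive for the current boot
[root@server2 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    //disable SELinux permanently
[root@server2 ~]# systemctl disable --now firewalld    //stop the firewall and keep it off after reboot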
2.2 Deploy docker engine on all nodes
Configure the yum repo file:
[root@server2 ~]# cd /etc/yum.repos.d/
[root@server2 yum.repos.d]# ls
[root@server2 yum.repos.d]# scp docker.repo server3:/etc/yum.repos.d/    //copy the repo file to the nodes
[root@server2 yum.repos.d]# scp docker.repo server4:/etc/yum.repos.d/
Install docker on all nodes (server2, server3, server4) and enable it to start at boot:
[root@server2 yum.repos.d]# yum install -y docker-ce
[root@server2 yum.repos.d]# systemctl enable --now docker
Configure daemon.json
[root@server2 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://reg.westos.org"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
[root@server2 ~]# systemctl restart docker
Remaining node configurations:
[root@server2 docker]# scp daemon.json server3:/etc/docker/
[root@server2 docker]# scp daemon.json server4:/etc/docker/
server4:
[root@server4 ~]# systemctl restart docker
server3:
[root@server3 ~]# systemctl restart docker
[root@server2 ~]# docker info
Resolve the warnings reported by docker info. Do the following on all nodes:
[root@server2 yum.repos.d]# cd /etc/sysctl.d
[root@server2 sysctl.d]# vim docker.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
[root@server2 sysctl.d]# sysctl --system
Add name resolution for the registry on all nodes.
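One minimal way to do this, assuming the registry hostname reg.westos.org resolves to server1 (172.25.77.1 in the table above), is an /etc/hosts entry on every node:
[root@server2 ~]# echo "172.25.77.1 reg.westos.org server1" >> /etc/hosts    //repeat on server3 and server4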
Configure Certificate
Send the previously generated certificates to server2 (the master node):
[root@server1 mnt]# cd /etc/docker
[root@server1 docker]# ls
[root@server1 docker]# scp -r certs.d/ server2:/etc/docker/
[root@server2 ~]# cd /etc/docker
[root@server2 docker]# ls
[root@server2 docker]# scp -r certs.d/ server3:/etc/docker/
[root@server2 docker]# scp -r certs.d/ server4:/etc/docker/
Verify that all nodes can access the harbor private registry:
[root@server2 docker]# docker pull busybox
[root@server3 ~]# docker pull busybox
[root@server4 ~]# docker pull busybox
Before pulling the image, make sure it already exists in your private registry (one way to seed it is sketched below).
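If busybox is not in harbor yet, one way to seed it (the project name library used here is only an example; adjust it to your harbor setup) is to retag and push it from a host that already has the image, e.g. server1:
[root@server1 ~]# docker tag busybox:latest reg.westos.org/library/busybox:latest    //retag for the private registry
[root@server1 ~]# docker push reg.westos.org/library/busybox:latest    //push so the nodes can pull it through the mirror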
2.3 Disable swap on all nodes
[root@server2 docker]# swapoff -a
[root@server2 docker]# vim /etc/fstab    //comment out the swap entry in /etc/fstab
/dev/mapper/rhel-root   /       xfs     defaults        0 0
UUID=d2b4bb78-5138-4096-afcf-ea6ae3526b71 /boot   xfs     defaults        0 0
#/dev/mapper/rhel-swap  swap    swap    defaults        0 0
The steps in server3 and server4 are the same as those in server2 above, so they are omitted
2.4 Install the deployment tool kubeadm
Install software
Configure the repo file on all nodes to install from the Alibaba Cloud mirror:
[root@server2 docker]# cd /etc/yum.repos.d/
[root@server2 yum.repos.d]# vim k8s.repo
[root@server2 yum.repos.d]# cat k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@server2 yum.repos.d]# scp k8s.repo server3:/etc/yum.repos.d/
[root@server2 yum.repos.d]# scp k8s.repo server4:/etc/yum.repos.d/
Enable address forwarding on the host so the virtual machines can reach external networks:
[root@localhost Desktop]# iptables -t nat -I POSTROUTING -s 172.25.77.0/24 -j MASQUERADE
Install on all nodes and enable start at boot:
yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
Master node (server2):
[root@server2 yum.repos.d]# kubeadm config print init-defaults    //view the default configuration
[root@server2 yum.repos.d]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers    //list the required images
[root@server2 yum.repos.d]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers    //pull the images
The pull may take a while; wait patiently.
Upload the required images to the private registry
- Create a k8s project in harbor
- Filter out the source addresses
[root@server2 yum.repos.d]# docker images | grep registry.aliyuncs.com
- Filter out the image names
[root@server2 yum.repos.d]# docker images | grep registry.aliyuncs.com | awk '{print $1":"$2}' | awk -F/ '{print $3}'
- Modify the image tags
[root@server2 yum.repos.d]# docker images | grep registry.aliyuncs.com | awk '{print $1":"$2}' | awk -F/ '{print $3}' | awk '{system("docker tag registry.aliyuncs.com/google_containers/"$1" reg.westos.org/k8s/"$1)}'    //retag the images
[root@server2 yum.repos.d]# docker images | grep reg.westos.org    //filter to verify
- Upload the images to the private registry
[root@server2 yum.repos.d]# docker login reg.westos.org    //log in to the registry
[root@server2 yum.repos.d]# docker images | grep reg.westos.org | awk '{system("docker push "$1":"$2)}'    //push the images
- Verify: pulling again is now very fast
[root@server2 yum.repos.d]# kubeadm config images list --image-repository reg.westos.org/k8s    //list the images
[root@server2 yum.repos.d]# kubeadm config images pull --image-repository reg.westos.org/k8s    //pulling the images locally is now very fast
Initialize Cluster
[root@server2 yum.repos.d]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.westos.org/k8s
--pod-network-cidr=10.244.0.0/16    //must be added when using the flannel network component
--kubernetes-version    //specifies the k8s version to install
After initialization, you get some tips:
[root@server2 yum.repos.d]# export KUBECONFIG=/etc/kubernetes/admin.conf    //execute as prompted
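kubeadm init also prints the usual steps for running kubectl as a regular user; as a sketch, they typically look like this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config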
Looking at the master status, we find that the master is not ready yet because the network component is still missing:
[root@server2 yum.repos.d]# kubectl get node
[root@server2 yum.repos.d]# kubectl get pod -n kube-system
2.5 Install flannel network components
[root@server2 ~]# docker pull quay.io/coreos/flannel:v0.14.0    //pull flannel:v0.14.0
[root@server2 ~]# docker tag quay.io/coreos/flannel:v0.14.0 reg.westos.org/k8s/flannel:v0.14.0    //change the tag
[root@server2 ~]# docker push reg.westos.org/k8s/flannel:v0.14.0    //upload to the registry
[root@server2 ~]# yum install -y wget    //install the download tool
[root@server2 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml    //download the manifest
[root@server2 ~]# vim kube-flannel.yml    //modify the manifest
169         image: k8s/flannel:v0.14.0
183         image: k8s/flannel:v0.14.0
[root@server2 ~]# kubectl apply -f kube-flannel.yml    //apply it
View the status: the master node is now functioning properly
[root@server2 ~]# kubectl get pod -n kube-system
[root@server2 ~]# kubectl get node
2.6 Node Expansion
Run the join command on server3 and server4, as prompted by the earlier cluster initialization:
kubeadm join 172.25.77.2:6443 --token ye5kwg.bqgzkttd7xjji7kd --discovery-token-ca-cert-hash sha256:d36073e942d364ddd37b08d5d9d77e60794b4e196255de6484749d2d1226f382
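Note that the bootstrap token in the join command expires (after 24 hours by default); if it has expired, a fresh join command can be generated on the master:
[root@server2 ~]# kubeadm token create --print-join-command    //prints a new, valid kubeadm join line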
View the node status on server2 (the master):
[root@server2 ~]# kubectl get nodes
[root@server2 ~]# kubectl get pod -n kube-system
As shown: the expansion succeeded.
2.7 Configure kubectl command completion
[root@server2 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@server2 ~]# source ~/.bashrc