1, K8s quick start
1. Introduction
Kubernetes, abbreviated as k8s, is an open-source platform for automating the deployment, scaling, and management of containerized workloads and services. It promotes declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.
Chinese official website: https://kubernetes.io/zh/
Chinese community: https://www.kubernetes.org.cn/
Official documents: https://kubernetes.io/zh/docs/home/
Community documents: https://docs.kubernetes.org.cn/
Evolution of deployment methods (traditional deployment → virtualized deployment → containerized deployment):
For more information, see the official documentation: https://kubernetes.io/zh/docs/concepts/overview/what-is-kubernetes/
2. Architecture
Reference: https://kubernetes.io/zh/docs/concepts/overview/components/
1) Master node architecture
- kube-apiserver
  - The API that k8s exposes to the outside world; the only entry point for operations on cluster resources.
  - Provides authentication, authorization, admission control, and API registration and discovery mechanisms.
- etcd
  - etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
  - The etcd database of a Kubernetes cluster usually needs a backup plan (see the snapshot sketch after this list).
- kube-scheduler
  - The master-node component that watches for newly created Pods with no assigned node, and selects a node for each Pod to run on.
  - All scheduling in the k8s cluster is decided on the master node.
- kube-controller-manager: the component that runs controller processes on the master node. These controllers include:
  - Node Controller: responsible for noticing and responding when nodes fail
  - Job Controller: watches Job objects that represent one-off tasks, then creates Pods to run those tasks to completion
  - Endpoints Controller: populates Endpoints objects (i.e., joins Services and Pods)
  - Service Account & Token Controllers: create default accounts and API access tokens for new namespaces
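Since etcd stores all cluster state, the backup plan mentioned above matters in practice. Here is a minimal point-in-time snapshot sketch using etcdctl; the endpoint and certificate paths below are the kubeadm defaults and are assumptions, so adjust them for your environment:
# Run on the master node; saves a snapshot of all cluster data
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key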
2) Node architecture
- kubelet
  - An agent that runs on each node in the cluster and ensures that containers are running in a Pod. The kubelet receives a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
  - Responsible for maintaining the container life cycle, as well as Volume (CSI) and network (CNI) management.
- kube-proxy
  - A network proxy that runs on each node in the cluster, implementing part of the Kubernetes Service concept.
  - kube-proxy maintains network rules on nodes. These rules allow network sessions inside or outside the cluster to communicate with Pods. (A quick way to see these node components on a live cluster follows this list.)
- Container Runtime
  - The container runtime is the software responsible for running containers.
  - Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
- fluentd
  - A daemon that helps provide cluster-level logging.
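Once the cluster built later in this article is running, you can observe these node components directly (a sketch; assumes the kubeadm + flannel setup installed below):
# kubelet runs as a systemd service on every node
systemctl status kubelet
# kube-proxy and flannel run as Pods in the kube-system namespace
kubectl get pods -n kube-system -o wide | grep -E 'kube-proxy|flannel'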
3. Concepts
- Container: a container, for example one started by Docker.
- Pod:
  - k8s uses Pods to organize a group of containers (a minimal Pod manifest sketch follows this list).
  - All containers in a Pod share the same network.
  - The Pod is the smallest deployment unit in k8s.
- Volume:
  - Declares a file directory accessible to containers in the Pod.
  - Can be mounted at specified paths of one or more containers in the Pod.
  - Supports multiple backend storage abstractions (local storage, distributed storage, cloud storage).
- Controllers: higher-level objects that deploy and manage Pods:
  - ReplicaSet: ensures the expected number of Pod replicas
  - Deployment: stateless application deployment
  - StatefulSet: stateful application deployment, such as MySQL
  - DaemonSet: ensures that all nodes run a specified Pod
  - Job: one-off task
  - CronJob: scheduled task
- Deployment:
  - Defines the replica count, version, etc. of a group of Pods.
  - Maintains the Pod count through a controller (automatically replaces failed Pods).
  - Controls versions with the specified strategy (rolling upgrade, rollback, etc.).
- Service:
  - Defines an access policy for a group of Pods.
  - Load-balances across Pods and provides a stable access address for one or more Pods.
  - Supports multiple types (ClusterIP, NodePort, LoadBalancer).
- Label: a label, used to query and filter object resources.
- Namespace: namespace, for logical isolation.
  - A logical isolation mechanism within the cluster (authentication, resources).
  - Each resource belongs to a namespace.
  - Resource names within the same namespace must be unique.
  - Resource names can be reused across different namespaces.
- API: we operate the whole cluster through the Kubernetes API.
  - kubectl, UIs, and curl ultimately send HTTP+JSON/YAML requests to the API Server to control the k8s cluster (see the curl sketch below). All resource objects in k8s can be defined or described in YAML or JSON format.
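To make the Pod, Label, and Namespace concepts concrete, here is a minimal Pod manifest sketch (the name nginx-demo and the image are hypothetical examples, not taken from this article):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # hypothetical example name
  namespace: default      # every resource belongs to a namespace
  labels:
    app: nginx-demo       # labels are used to query and filter resources
spec:
  containers:             # all containers in this Pod share the same network
  - name: nginx
    image: nginx:1.21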
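And to illustrate the API entry point itself: kubectl proxy can expose the API Server locally so it can be queried with plain curl (a sketch; assumes kubectl is already configured for the cluster):
# Proxy the API Server to localhost:8001
kubectl proxy --port=8001 &
# List the Pods in the default namespace via the REST API; the response is JSON
curl http://localhost:8001/api/v1/namespaces/default/pods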
2, K8s cluster installation
1. Prerequisites
- Prepare one or more machines running CentOS 7.x-86_x64. I use three CentOS 7 machines.
- Hardware: 2 GB or more of RAM, 2 or more CPUs, 30 GB or more of disk. My machines are 2-core, 2 GB RAM, 64 GB disk.
- All machines in the cluster can reach each other over the network.
- Internet access is available, since images need to be pulled.
2. Configure the Linux environment (execute on all nodes)
- Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
- Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
- Disable swap
swapoff -a                           # temporarily disable
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanently disable
free -g                              # verify that swap is 0
- Map host names to IPs:
View the host name:
hostname
If the host name is wrong, set a new one with hostnamectl set-hostname <newhostname>. My three nodes are named k8s-node1, k8s-node2, and k8s-node3.
Configure /etc/hosts on all three machines so they can reach each other by host name:
vim /etc/hosts
10.211.55.19 k8s-node1   # change these IPs to the IP of the corresponding host
10.211.55.21 k8s-node2
10.211.55.20 k8s-node3
- Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the rules:
sysctl --system
- Synchronize the time (optional)
yum -y install ntpdate
ntpdate time.windows.com   # sync the latest time
3. Install docker, kubeadm, kubelet, and kubectl on all nodes
By default, the CRI (container runtime) of Kubernetes is Docker, so Docker is installed first.
1) Install Docker
- Uninstall any previous Docker
$ sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
- Install Docker CE
$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Set up the yum repo for docker-ce
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum -y install docker-ce docker-ce-cli containerd.io
- Configure image acceleration
In registry-mirrors you can use your own image acceleration address, which can be obtained from the Aliyun image accelerator:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://r1fl0qlt.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
- Set Docker to start on boot
sudo systemctl enable docker
2) Add the Aliyun yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3) Install kubeadm, kubelet, and kubectl
Install:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
Enable at boot and start:
systemctl enable kubelet && systemctl start kubelet
To view the status of kubelet:
systemctl status kubelet
View the kubelet version:
> kubelet --version
Kubernetes v1.17.3
4) Deploy k8s master
(1) Master node initialization
Initialize with kubeadm; the following command downloads the k8s component images:
$ kubeadm init \
    --apiserver-advertise-address=10.211.55.19 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.3 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
Parameter description:
- --apiserver-advertise-address: the IP address of the master host; I use the first of the three machines, k8s-node1.
- --kubernetes-version: the version you installed, consistent with the kubeadm version from the previous step (check with kubelet --version).
Note: if the initialization command above fails, run kubeadm reset -f before running it again.
Execution result:
If initialization succeeds, the log ends with a kubeadm join command and instructions for configuring kubectl access.
After initializing the node, execute these three commands as prompted in the log:
# This directory holds the connection configuration
mkdir -p $HOME/.kube
# Copy the authentication file into that directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
(2) Install flannel
Install a CNI network plugin; here we install flannel.
Create a directory for flannel.
- Download the yml file:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the URL above cannot be reached, copy or download this yml file from a Gitee (code cloud) mirror of flannel.
- Create flannel
kubectl create -f kube-flannel.yml
- Check whether it is running
You can see that kube-flannel is running:
[root@k8s-node1 flannel]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7d89d9b6b8-bjmsp            1/1     Running   0          36m
kube-system   coredns-7d89d9b6b8-z46zp            1/1     Running   0          36m
kube-system   etcd-k8s-node1                      1/1     Running   1          37m
kube-system   kube-apiserver-k8s-node1            1/1     Running   1          37m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   1          37m
kube-system   kube-flannel-ds-srf9w               1/1     Running   0          65s
kube-system   kube-proxy-n64fp                    1/1     Running   0          36m
kube-system   kube-scheduler-k8s-node1            1/1     Running   1          37m
5) Add slave nodes
- View all nodes (run on the master)
# Get all nodes
$ kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
k8s-node1   Ready    control-plane,master   40m   v1.22.2
- Get the node join command
$ kubeadm token create --print-join-command
kubeadm join 10.37.132.5:6443 --token yzd8bg.mzdp8ua5cyuggpdp --discovery-token-ca-cert-hash sha256:71216e7189e48bfedac9fce79c9dd6920b3354ff62023e7ea069848fa84a3861
- Run the printed command on each slave node
$ kubeadm join 10.37.132.5:6443 --token yzd8bg.mzdp8ua5cyuggpdp --discovery-token-ca-cert-hash sha256:71216e7189e48bfedac9fce79c9dd6920b3354ff62023e7ea069848fa84a3861
- View node status
$ kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8s-node1   Ready    control-plane,master   68m     v1.22.2
k8s-node2   Ready    <none>                 4m16s   v1.22.2
k8s-node3   Ready    <none>                 4m12s   v1.22.2
$ journalctl -u kubelet   # view the kubelet log
Monitor pod progress:
watch kubectl get pod -n kube-system -o wide
After all pods reach Running status, view the node information again:
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3
3, Getting started with the Kubernetes cluster
1. Deploy a Tomcat with k8s
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
Other commands:
# Get all resources
$ kubectl get all
# Get details of all resources
$ kubectl get all -o wide
# Get pod information
$ kubectl get pods
2. Expose the port outward
Execute on master
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
- --port=80: the port the Service exposes; requests to the Service's port 80 are forwarded to the Pods.
- --target-port=8080: the container port; the Service's port 80 maps to port 8080 in the container.
- --type=NodePort: a Service type that binds a port on every machine, so the Service can be accessed via NodeIP:NodePort. (An equivalent Service manifest sketch follows the svc output below.)
View the service, then visit k8s-node1:30916 to reach Tomcat (30916 is the NodePort automatically assigned by the Service; it forwards to the Service's port 80 and on to port 8080 in the container):
# View service information
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        91m
tomcat6      NodePort    10.96.96.4   <none>        80:30916/TCP   12s
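The expose command above is roughly equivalent to applying a Service manifest like this sketch (the nodePort field is shown only for illustration; when omitted, as with the command above, k8s auto-assigns one from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: tomcat6
spec:
  type: NodePort
  selector:
    app: tomcat6          # route traffic to Pods carrying this label
  ports:
  - port: 80              # the Service port
    targetPort: 8080      # the container port
    # nodePort: 30916     # optional; auto-assigned if omitted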
3. Dynamic scaling test
# Viewing the deployment, there is only one replica
$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   1/1     1            1           8m44s
# Scale to 3 replicas
$ kubectl scale --replicas=3 deployment tomcat6
deployment.apps/tomcat6 scaled
# You can see the scaling in progress
$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   2/3     3            2           9m35s
4. Delete
$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   3/3     3            3           12m
$ kubectl delete deployment tomcat6
deployment.apps "tomcat6" deleted
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        91m
tomcat6      NodePort    10.96.96.4   <none>        80:30916/TCP   12s
$ kubectl delete service tomcat6
service "tomcat6" deleted
5. Output in yaml format
# Print the yaml corresponding to the Tomcat deployment command used earlier
$ kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run=client -o=yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
- --dry-run: must be none (the default), server, or client. With the client strategy, the object is only printed, not sent; with the server strategy, the request is submitted to the server but the resource is not persisted.
- -o: the output format; can be json, yaml, wide, etc.
6. Deploy Tomcat using yaml
- Take the yaml printed in the previous step, delete the unneeded fields, change replicas to 3, and use the yaml file to deploy 3 Tomcats:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
- Save the above yaml to a file tomcat6.yaml and apply it:
$ kubectl apply -f tomcat6.yaml
- apply: creates or updates resources (delete is used for deletion)
- -f: specifies the yaml file
View the yaml of a pod:
# View pods
$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-56fcc999cb-b4nt7   1/1     Running   0          2m49s
tomcat6-56fcc999cb-pncn9   1/1     Running   0          2m49s
tomcat6-56fcc999cb-wczpj   1/1     Running   0          2m49s
# View the yaml of a pod
$ kubectl get pods tomcat6-56fcc999cb-b4nt7 -o yaml
- Delete the Tomcat created with yaml
$ kubectl delete -f tomcat6.yaml
7. Some official documents
1) kubectl reference documentation
https://kubernetes.io/zh/docs/reference/kubectl/overview/
2) Resource types
https://kubernetes.io/zh/docs/reference/kubectl/overview/#%e8%b5%84%e6%ba%90%e7%b1%bb%e5%9e%8b