Introduction to Kubernetes
Kubernetes (K8s for short: there are 8 letters between the "K" and the "s") is an open-source system for automatically deploying, scaling, and managing containerized applications. It groups the containers that make up an application into logical units for easy management and service discovery. Kubernetes builds on Google's 15 years of experience running production workloads, combined with the best ideas and practices of the community.
Kubernetes has the following features:
- Service discovery and load balancing: you can use Kubernetes' service discovery mechanism without modifying your application; Kubernetes can expose a set of containers under a single DNS name and load-balance traffic across them.
- Storage orchestration: automatically mounts the storage system of your choice, including local storage.
- Secret and configuration management: deploy and update Secrets and application configuration without rebuilding container images, and without exposing secrets in your stack configuration.
- Batch execution: in addition to services, Kubernetes can manage your batch and CI workloads, replacing failed containers when desired.
- Horizontal scaling: scale your application up or down with a simple command, through a UI, or automatically based on CPU usage.
- Automated rollouts and rollbacks: Kubernetes rolls out changes to your application or its configuration progressively while monitoring application health, ensuring it does not terminate all instances at the same time.
- Automatic bin packing: automatically places containers based on their resource requirements and other constraints, without sacrificing availability.
- Self-healing: restarts containers that fail, replaces and reschedules containers when a node dies, and kills containers that do not respond to your user-defined health check.
Introduction to Minikube
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows. The Minikube CLI provides basic operations for bootstrapping the cluster, including start, stop, status, and delete.
Kubernetes core concepts
Kubernetes has many core concepts, and learning them is very helpful for understanding how Kubernetes is used, so let's go through these core concepts first.
Node
A Kubernetes cluster is a set of highly available computers that Kubernetes coordinates to work as a single unit, rather than as independently connected machines.
A Kubernetes cluster contains two types of resources:
- Master: responsible for managing the entire cluster. It coordinates all activities in the cluster, such as scheduling applications, maintaining their desired state, scaling them, and rolling out new updates.
- Node: hosts the running applications. A Node can be a virtual or physical machine and acts as a worker machine in the Kubernetes cluster. Each Node runs a Kubelet, an agent that manages the Node and communicates with the Master. The Node also runs tools for handling container operations, such as Docker or rkt.
Deployment
A Deployment is responsible for creating and updating instances of an application. After a Deployment is created, the Kubernetes Master schedules the application instances onto Nodes in the cluster. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with one on another Node in the cluster. This provides a self-healing mechanism for dealing with machine failure and maintenance.
You can create and manage Deployments with the Kubernetes command-line interface, kubectl, which interacts with the cluster through the Kubernetes API.
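Although this article creates Deployments with kubectl commands, a Deployment is usually described declaratively in a YAML manifest and applied with kubectl apply -f. Here is a minimal sketch for an Nginx application like the one deployed later; the name, labels, and replica count are illustrative, not taken from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-nginx        # illustrative name
spec:
  replicas: 1                   # desired number of Pod instances
  selector:
    matchLabels:
      app: kubernetes-nginx     # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: kubernetes-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10     # the application image
```

The controller continuously reconciles the cluster toward this desired state, which is what enables the self-healing behavior described above.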
Pod
A Pod is the equivalent of a logical host and is responsible for hosting application instances. It contains one or more application containers (for example, Docker containers) together with resources shared by those containers: shared storage, networking, information about how to run them, and so on.
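The idea that containers in a Pod share resources can be sketched in a manifest. The names, images, and the emptyDir volume below are illustrative assumptions: two containers run in one Pod and share a volume, so a file written by one is served by the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logical-host-demo      # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}             # storage shared by both containers
  containers:
    - name: web
      image: nginx:1.10
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox
      # Writes a page into the shared volume, then stays alive
      command: ["sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
```

Both containers also share the Pod's network namespace, so they can reach each other over localhost.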
Service
A Service is an abstraction layer that defines a logical set of Pods and supports external traffic exposure, load balancing, and service discovery for those Pods.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. A Service is what allows your application to receive traffic. A Service can be exposed in different ways by specifying a type in the ServiceSpec:
- ClusterIP (default): exposes the Service on an internal IP of the cluster, making it reachable only from within the cluster.
- NodePort: uses NAT to expose the Service on the same port of each selected Node in the cluster, making it reachable from outside the cluster via <NodeIP>:<NodePort>. A superset of ClusterIP.
- LoadBalancer: creates an external load balancer in the current cloud (if supported) and assigns a fixed external IP to the Service. A superset of NodePort.
- ExternalName: exposes the Service under an arbitrary name (specified by externalName in the spec) by returning a CNAME record with that name. No proxying is used.
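A NodePort Service of the kind created later in this article can be sketched declaratively as well; the names and port numbers here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-nginx
spec:
  type: NodePort              # expose on every Node's IP
  selector:
    app: kubernetes-nginx     # traffic is routed to Pods with this label
  ports:
    - port: 80                # the Service's cluster-internal port
      targetPort: 80          # the container port receiving the traffic
      nodePort: 30158         # external port (30000-32767); omit to auto-assign
```

The selector is what binds the Service to its set of Pods, which is also why Labels (covered below) matter so much in practice.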
Docker installation
Kubernetes depends on a container runtime (the software responsible for running containers); common runtimes include Docker, containerd, and CRI-O. Docker is chosen here, so first install the Docker environment on the Linux server.
- Install yum-utils and related dependencies:
yum install -y yum-utils device-mapper-persistent-data lvm2
- Add the Docker repository to the yum sources:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- To install Docker:
yum install docker-ce
- Start Docker:
systemctl start docker
Minikube installation
- First, we need to download the binary installation package of Minikube and install:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
- Then start Minikube with the following command:
minikube start
- If you start as root, startup fails with the message below, because Minikube does not allow the docker driver to run with root privileges. Create a non-root account and start again:
* minikube v1.16.0 on Centos 7.6.1810
* Automatically selected the docker driver
* The "docker" driver should not be used with root privileges.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
- Here we create a user named macro belonging to the docker group and switch to it:
# Create the user
useradd -u 1024 -g docker macro
# Set the user's password
passwd macro
# Switch to the user
su macro
- Start Minikube again with the minikube start command. After a successful start, the following information is displayed:
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Use of Kubernetes
Create cluster
With Minikube we can create a single-node K8s cluster; both the Master that manages the cluster and the Node responsible for running applications are deployed on this one node.
- View the version number of Minikube:
minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1
- Check the version number of kubectl; the first time this runs, kubectl is installed automatically:
minikube kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:51:19Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
- If you want to use the kubectl command directly, you can copy it to the /bin directory:
# Find the location of the kubectl binary
find / -name kubectl
# After finding it, copy it to the /bin directory
cp /mydata/docker/volumes/minikube/_data/lib/minikube/binaries/v1.20.0/kubectl /bin/
# Now the kubectl command can be used directly
kubectl version
- View cluster details:
kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
KubeDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- List all nodes in the cluster; you can see that Minikube has created a simple single-node cluster:
kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   46m   v1.20.0
Deploy application
Once the K8s cluster is running, you can deploy containerized applications on it. By creating a Deployment object, you tell K8s how to create and update instances of your application.
- Specify an application image and create a Deployment; here we create an Nginx application:
kubectl create deployment kubernetes-nginx --image=nginx:1.10
- When a Deployment is created, K8s performs the following operations:
- Select an appropriate Node to deploy the application;
- Deploy the application to the Node;
- Redeploy the application when the application closes abnormally or is deleted.
- View all deployments:
kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-nginx   1/1     1            1           21h
- With the kubectl proxy command we can create a proxy that lets us access the K8s API directly through the exposed endpoint; here we call the endpoint that reports the K8s version:
[macro@linux-local root]$ kubectl proxy
Starting to serve on 127.0.0.1:8001

[root@linux-local ~]# curl http://localhost:8001/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.0",
  "gitCommit": "af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38",
  "gitTreeState": "clean",
  "buildDate": "2020-12-08T17:51:19Z",
  "goVersion": "go1.15.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
View application
By operating the Pod running the application, you can view the container log or execute commands inside the container.
- View the status of all pods in K8s:
kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
kubernetes-nginx-78bcc44665-8fnnn   1/1     Running   1          21h
- View the detailed status of the Pod, including its IP address, occupied ports, the image used, and other information:
kubectl describe pods
Name:         kubernetes-nginx-78bcc44665-8fnnn
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Tue, 05 Jan 2021 13:57:46 +0800
Labels:       app=kubernetes-nginx
              pod-template-hash=78bcc44665
              version=v1
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
  IP:           172.17.0.7
Controlled By:  ReplicaSet/kubernetes-nginx-78bcc44665
Containers:
  nginx:
    Container ID:   docker://31eb1277e507ec4cf8a27b66a9f4f30fb919d17f4cd914c09eb4cfe8322504b2
    Image:          nginx:1.10
    Image ID:       docker-pullable://nginx@sha256:6202beb06ea61f44179e02ca965e8e13b961d12640101fca213efbfd145d7575
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 06 Jan 2021 09:22:40 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 05 Jan 2021 14:24:55 +0800
      Finished:     Tue, 05 Jan 2021 17:32:48 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dhr4b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-dhr4b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dhr4b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
- Set the Pod's name as an environment variable so that $POD_NAME can be used to refer to the Pod later:
export POD_NAME=kubernetes-nginx-78bcc44665-8fnnn
- View the logs printed by the Pod:
kubectl logs $POD_NAME
- Use exec to run commands inside the Pod's container; here the env command is used to view environment variables:
kubectl exec $POD_NAME -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubernetes-nginx-78bcc44665-8fnnn
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_VERSION=1.10.3-1~jessie
HOME=/root
- Enter the container and run bash; to leave the container, use the exit command:
kubectl exec -ti $POD_NAME -- bash
Expose the application publicly
By default, a Pod cannot be accessed from outside the cluster; you need to create a Service and expose a port before it can be accessed externally.
- Create a Service to expose the kubernetes-nginx Deployment:
kubectl expose deployment/kubernetes-nginx --type="NodePort" --port 80
- View the status of all services in K8S:
kubectl get services
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP        5h16m
kubernetes-nginx   NodePort    10.105.177.114   <none>        80:31891/TCP   5s
- View the details of the Service; the externally exposed port can be read from the NodePort attribute:
kubectl describe services/kubernetes-nginx
Name:                     kubernetes-nginx
Namespace:                default
Labels:                   app=kubernetes-nginx
Annotations:              <none>
Selector:                 app=kubernetes-nginx
Type:                     NodePort
IP Families:              <none>
IP:                       10.106.227.54
IPs:                      10.106.227.54
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30158/TCP
Endpoints:                172.17.0.7:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
- With the curl command, you can access the Nginx service via the Minikube IP and the NodePort; the Nginx home page is printed:
curl $(minikube ip):30158
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Use of labels
By adding Labels to resources (such as Deployments, Pods, and Services), you can manage those resources conveniently.
- View the Labels on the Deployment:
kubectl describe deployment
Name:                   kubernetes-nginx
Namespace:              default
CreationTimestamp:      Tue, 05 Jan 2021 13:57:46 +0800
Labels:                 app=kubernetes-nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=kubernetes-nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
- Query Pods by Label:
kubectl get pods -l app=kubernetes-nginx
NAME                                READY   STATUS    RESTARTS   AGE
kubernetes-nginx-78bcc44665-8fnnn   1/1     Running   1          21h
- Query Services by Label:
kubectl get services -l app=kubernetes-nginx
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-nginx   NodePort   10.106.227.54   <none>        80:30158/TCP   4m44s
- Add a Label to the Pod:
kubectl label pod $POD_NAME version=v1
- View the Pod's details; the Label information can now be seen:
kubectl describe pods $POD_NAME
Name:         kubernetes-nginx-78bcc44665-8fnnn
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Tue, 05 Jan 2021 13:57:46 +0800
Labels:       app=kubernetes-nginx
              pod-template-hash=78bcc44665
              version=v1
- Query Pods by Label:
kubectl get pods -l version=v1
- Delete the Service by Label:
kubectl delete service -l app=kubernetes-nginx
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   30h
Visual management
The Dashboard is a web-based K8s user interface. You can use the Dashboard to deploy containerized applications to the K8s cluster, troubleshoot them, and manage cluster resources.
- View Minikube's built-in add-ons; by default, the Dashboard add-on is not enabled:
minikube addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
|-----------------------------|----------|--------------|
- Enable the Dashboard add-on:
minikube addons enable dashboard
- Open the Dashboard. With the --url parameter, the management page is not opened automatically; instead, the access URL is printed to the console:
minikube dashboard --url
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
http://127.0.0.1:44469/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
- To access the Dashboard from outside the machine, set up a proxy with kubectl; --address is your server's address:
kubectl proxy --port=44469 --address='192.168.5.94' --accept-hosts='^.*' &
- To access the server from outside, open the firewall port:
# Switch to root
su -
# Open the port
firewall-cmd --zone=public --add-port=44469/tcp --permanent
# Reload the firewall configuration
firewall-cmd --reload
- You can access the Dashboard at the following address:
http://192.168.5.94:44469/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
- View the resource status information in the K8S cluster:
- Create K8s resources through a YAML script:
- View the status of all Pods in K8s; through the More button you can view container logs and execute commands inside containers.
Summary
When an application needs to be deployed across multiple physical machines, the traditional approach is to deploy it one machine at a time. With K8s, we can treat those machines as a single cluster: we simply deploy applications to the cluster through K8s, without caring about the deployment details of individual machines. At the same time, K8s provides features such as horizontal scaling, automatic bin packing, and self-healing, which greatly reduce the workload of deploying an application cluster.
Author: stay up late without overtime
Link: https://www.jianshu.com/p/8930396ee364
Source: Jianshu
The copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please indicate the source.