k8s first experience

K8S features — an introduction to K8S (a resource manager)

① Lightweight
Interpreted languages such as Python, JavaScript, Perl, and Shell are comparatively inefficient and consume more memory.
K8S is written in Go, a compiled language that supports concurrency (goroutines) at the language level without manual control, so software developed in Go has a small resource footprint.
② Open Source
③ Self-healing (controller-driven)
Controllers ensure that Pods maintain the expected number of replicas: failed containers are restarted, and Pods are replaced and redeployed when a node fails; containers that fail health checks are killed and receive no client requests until they are ready again, so online services are not interrupted.
Containers in an abnormal state are restarted or rebuilt (re-created after deletion) to keep the business line uninterrupted.
④ Elastic scaling
Manual scaling: when a single nginx Pod's read/disk/memory pressure exceeds 80%, modify replicas: 3 -> 4 in the nginx YAML and apply the update.
A CPU utilization threshold (> 80%) can trigger Pod scale-out. (CPU usage is capped via docker cgroups; in K8S: 1. limits, 2. configmap (profile).)
Using commands, the UI, or CPU-usage-based automation, application instances can be scaled out and in quickly, ensuring high availability during peak concurrency; at off-peak times resources are reclaimed so the service runs at minimal cost.
Scaling: expanding and shrinking (node or application, e.g. nginx).
Flexibility: once rules are specified, expansion or reduction is triggered automatically when the conditions are met.
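The automatic scaling described above can be sketched with a HorizontalPodAutoscaler. This is a minimal sketch, assuming a Deployment named `nginx` and reusing the 80% CPU threshold from the text; it requires the metrics-server to be running in the cluster:

```yaml
# Minimal HPA sketch: keeps the nginx Deployment between 3 and 4
# replicas, scaling out when average CPU utilization exceeds 80%.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx          # assumed Deployment name
  minReplicas: 3
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80
```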
⑤ Automatic deployment and rollback
K8S updates applications with a rolling-update strategy: Pods are replaced one at a time instead of all being deleted at once, and if a problem occurs during the update the change is rolled back, so an upgrade does not affect the business.
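As a sketch, the rolling-update behavior can be made explicit in a Deployment; the image tag and the surge/unavailable values here are illustrative choices, not mandated by the text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod during the update
      maxUnavailable: 0  # never take all Pods down at once
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21   # illustrative version
```

If an update misbehaves, `kubectl rollout undo deployment/nginx` rolls it back.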
⑥ Service discovery()And load balancing
K8S provides a unified access entry (an internal IP address and a DNS name) for a group of Pods (containers) and load-balances across all associated containers, so users do not need to care about container IPs.
The IPVS framework (by Zhang Wensong) can be used in place of iptables.
kube-proxy has 3 modes: ① userspace, ② iptables, ③ ipvs
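A minimal Service sketch of that unified entry: clients use the Service's stable cluster IP or its DNS name (here `nginx-svc.default.svc.cluster.local`) instead of individual Pod IPs. The name and label are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx      # load-balances across all Pods carrying this label
  ports:
  - port: 80        # the Service's own port on its virtual IP
    targetPort: 80  # the container port inside each Pod
```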
⑦ Secret and configuration management (secret -> security/authentication (encrypted data))
Manage confidential data and application configuration without exposing sensitive data in the image, improving the security of sensitive data. Common configurations can also be stored in K8S for applications to use.
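A hedged sketch of the two objects involved; the names and keys are illustrative. Note that Secret values are only base64-encoded at rest by default, not strongly encrypted, which is one more reason to keep them out of images:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |     # an ordinary, non-sensitive profile
    log_level=info
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:             # stored base64-encoded by the API server
  username: admin       # illustrative values
  password: changeme
```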
⑧ Storage orchestration (static, dynamic)
Mount external storage systems, whether local storage, public cloud storage (e.g. AWS), or network storage (e.g. NFS, GlusterFS, Ceph), as part of the cluster's resources, which greatly improves the flexibility of storage use.
⑨ Batch processing
 Provides one-off tasks (Job) and scheduled tasks (CronJob) for batch data processing and analysis scenarios.
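The two batch objects can be sketched as follows (images and schedule are illustrative; `batch/v1` for CronJob applies to newer clusters — older ones use `batch/v1beta1`):

```yaml
apiVersion: batch/v1
kind: Job                     # one-off task: runs to completion once
metadata:
  name: batch-once
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing batch data"]
      restartPolicy: Never
---
apiVersion: batch/v1
kind: CronJob                 # scheduled task, crontab-style schedule
metadata:
  name: batch-nightly
spec:
  schedule: "0 2 * * *"       # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: worker
            image: busybox
            command: ["sh", "-c", "echo nightly analysis"]
          restartPolicy: Never
```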


K8S aims to make it easier and more efficient to deploy container applications and manage container cluster resources

Basic components

1. pod (smallest resource unit)
A pod encapsulates multiple containers to form a running environment on a node (the minimum unit; container count 2+).
Minimum deployment unit.
A set of containers (base container + main application container + sidecar containers).
Containers in a Pod share a network namespace (provided by the underlying pause container).
Pods are short-lived (a description of their life cycle).

1. How many containers run in a pod: 3 (the base (pause) container + an operations/sidecar container + the main application)
2. How do the containers in a pod communicate with each other: via localhost
3. How do pods on the same node communicate: via the docker0 bridge
4. How do pods on different nodes communicate: via a CNI plugin; here we use flannel
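Points 1 and 2 above can be sketched with a two-container Pod: both containers share the pause container's network namespace, so the sidecar reaches the main application on localhost. Names and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: main-app            # main application container
    image: nginx
  - name: ops-sidecar         # operations/sidecar container
    image: busybox
    # same network namespace: the sidecar polls the app on localhost:80
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```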

Direct pod communication between different nodes

First, define two node hosts: host A runs POD-A and host B runs POD-B.
① POD-A first sends the packet to the docker0 bridge.
② docker0 forwards it to the flannel0 bridge.
PS: the docker0 bridge obtains forwarding information from the flannel0 bridge via a hook function.
③ flannel0 forwards it to flanneld (a background process); flanneld queries the routing table entries maintained in ETCD to confirm where to send the packet.
ETCD stores the routing information flanneld needs to know:
 which node hosts pod-b — host B
 how host A reaches host B, i.e. through which physical network card (host)
④ flanneld forwards the packet to the host's physical network card.
⑤ The physical network card forwards the packet over UDP.
(In the packet, besides the source and destination IPs of host A and host B, the source/destination IPs of POD-A and POD-B are encapsulated inside the UDP payload.)

Host B receives the packet

① First, unpack the outer packet and find that the destination IP address is its own (host) IP address.
② After the UDP payload is unpacked, the inner POD IPs (source/destination) are found.
③ Host B's physical network card hands the packet to the flanneld process.
④ flanneld queries the routing table information maintained in ETCD and finds that the target is one of its own pods (it needs to confirm which docker bridge to use).
⑤ flanneld sends it to its own flannel0 bridge, which then sends it to the corresponding docker0 bridge.
⑥ The docker0 bridge (gateway) delivers the packet to the corresponding POD-B.

What kinds of pods run in a node?

A pod is not only tomcat, nginx, or another application service;
it can also be a service component in pod form, e.g. an L4 proxy,
or ingress-nginx: an L7 proxy that manages HTTP traffic, using a controller to manage a set of pod resources.

How can the running environment in a pod be made accessible to customers?

How does a pod interact with and expose itself through the network?
How are pods managed (controllers)?
How is service discovery implemented?

What is a Pod, and what controller types exist?

Pod: first of all, the smallest constituent unit, a collection of containers; application containers run inside pod resources.

1. Resource list:
① The concept of resources in K8S
② Resource list format (resource list)/profile: YAML syntax
③ Pod life cycle
 Pod phase
 Container probes ---> livenessProbe, readinessProbe
 Pod hooks
 Restart policy
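The life-cycle pieces listed above can be sketched in one Pod spec; the paths, ports, and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  restartPolicy: Always        # restart policy
  containers:
  - name: app
    image: nginx
    livenessProbe:             # failure kills and restarts the container
      httpGet:
        path: /healthz         # illustrative endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:            # failure removes the Pod from Service endpoints
      httpGet:
        path: /ready           # illustrative endpoint
        port: 80
      periodSeconds: 5
    lifecycle:                 # Pod hooks
      postStart:
        exec:
          command: ["sh", "-c", "echo started"]
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]
```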
2. Pod controllers (maintain Pod state toward the expected value)
A controller manages different objects and their characteristics in different ways.
Controller descriptions:
ReplicaSet  	ensures the expected number of Pod replicas
**Deployment  	stateless application deployment
**StatefulSet   stateful application deployment
DaemonSet  		ensures all Nodes run the same Pod
Job  			one-time task
CronJob         scheduled task
 Stateful applications: have special state, require storage ---> Apache, MySQL
 Stateless applications: no special state, no storage required ---> Tomcat, Nginx

How does a Pod expose its services?

Services are exposed through the unified entry / access policy defined by a Service.
Inside K8S, Pod communication is based on a group of private addresses, so by default Pods cannot directly serve clients (services and users).
Through service discovery, internal pod resources can be exposed to the client (as an IP:port), so the client can use that IP:port to access multiple pods in K8S (usually the replica set of one application).
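One common way to expose that IP:port to clients outside the cluster is a NodePort Service; the port numbers here are illustrative, and nodePort must fall in the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
spec:
  type: NodePort
  selector:
    app: nginx        # assumed Pod label
  ports:
  - port: 80          # cluster-internal Service port
    targetPort: 80    # container port in each Pod replica
    nodePort: 30080   # reachable on every node's IP at this port
```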


Service Classification:
① Stateless services: e.g. LVS
The service does not depend on its own state; instance state data can be kept in memory.
Any request can be processed by any instance. No state data is stored; instances can scale horizontally, with requests distributed to each node through load balancing. In a closed system there is only one data closed loop. This usually appears in single-architecture clusters.
② Stateful services: e.g. databases (which need persistence)
The service itself depends on or holds local state data, which must be persisted or be recoverable through other nodes. A request can only be processed by one node (or a node in the same state). State data is stored, and scaling instances requires the whole system to participate in state migration.
In a closed system there are multiple data closed loops whose data consistency must be considered. This usually appears in distributed architectures.


Stateless service: a service with no special state; each request is processed uniformly and indiscriminately by the server, and the request itself carries all the parameters the server needs (the server stores no data related to the request, excluding information stored in a database).
Stateful service: conversely, a stateful service keeps information from previous requests on the server in order to process the current request, e.g. a session.

To simplify further:

Stateful: requires persistence; some information must be shared between multiple requests.
Stateless: one-off; no persistence required; each request carries fresh data.

Stateless services are better suited to docker; for stateful services, kubernetes provides solutions.

Multiple storage types:
configmap (configuration management center): mainly stores configuration files
Secret: user passwords and files that need to be encrypted
volume: basic data (web files)
⭐⭐ PV, PVC: dynamic provisioning process
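A hedged sketch of a PV/PVC pair as statically provisioned; with a StorageClass, the PV side would instead be created dynamically. Names, sizes, and the NFS server address are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume          # admin-side: the actual storage
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.1.100       # illustrative NFS server
    path: /exports/web
---
apiVersion: v1
kind: PersistentVolumeClaim     # user-side: the request a Pod mounts
metadata:
  name: pvc-web
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi              # bound to a matching PV
```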

K8S storage types: which do you know, and where do you put your configuration files?

k8s expects that all applications can run in a POD.
Workloads that consume too many resources (those with particularly high resource requirements) are not suitable for this kind of virtualization.

Regarding use of the scheduler: how do you schedule to a specified node, and what is the scheduler's workflow?

Scheduler
K8S automatically schedules a new pod to the corresponding node (predicate/priority algorithms).
In a production environment we often need to manage the pod creation process (such as the location of the node it is created on),
for example:
  create the Pod on a specified node (specified scheduling)
  create different Pods on the same node (affinity)
  create different Pods on different nodes (anti-affinity)
  assemble pods onto nodes according to our own needs, etc.
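The first two cases above can be sketched as Pod fragments; label keys and node names are illustrative assumptions:

```yaml
# ① Specified scheduling: pin the Pod to one node
apiVersion: v1
kind: Pod
metadata:
  name: pinned
spec:
  nodeName: node01              # bypasses the scheduler entirely
  containers:
  - name: app
    image: nginx
---
# ② Affinity: require landing on the same node as Pods labeled app=cache
apiVersion: v1
kind: Pod
metadata:
  name: wants-cache
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: nginx
```

Anti-affinity (③) uses the same structure under `podAntiAffinity` instead of `podAffinity`.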

⭐⭐ Label: a tag attached to a resource, used to associate objects and to query and filter them

Create a POD
1. Write the pod YAML file (how the nginx resource runs, what environment variables it uses, whether resource constraints are required, and on which node) — label: nginx
2. Expose the pod via a Service, also defined in a YAML file — label: nginx
Resources carrying the same label are associated and grouped together.
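The label association in steps 1-2 can be sketched as a Pod and Service pair joined by the same label; the names and resource limits are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx          # the label the Service will match
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:           # resource constraints, as in step 1
        cpu: "500m"
        memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx          # same label -> associated and grouped together
  ports:
  - port: 80
```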

Tags: Docker

Posted on Fri, 24 Sep 2021 08:35:44 -0400 by willchoong