k8s introduction and cluster construction and deployment

About Kubernetes

Chinese document: http://docs.kubernetes.org.cn/

Kubernetes is an open-source platform for automated deployment, scaling, and operation of container clusters. With Kubernetes, you can respond to user needs quickly and effectively: deploy applications rapidly and predictably, scale them on demand, roll out new functionality seamlessly, and save resources by optimizing the use of your hardware. It provides a complete open-source solution for container orchestration and management.

  • Docker, as a high-level container engine, has developed rapidly. Inside Google, container
    technology has been used for many years: the Borg system runs and manages thousands of
    containerized applications.
  • The Kubernetes project originates from Borg; it can be regarded as the essence of Borg's
    design ideas, absorbing the experience and lessons of the Borg system.
  • Kubernetes abstracts computing resources at a higher level: by carefully combining
    containers, it delivers the final application service to the user.
  • Benefits of Kubernetes:
    • Resource management and error handling are hidden, so users only need to focus on application development.
    • Services are highly available and reliable.
    • Workloads can run in clusters of thousands of machines.

k8s can be deployed either in a containerized way or from binaries (containerized deployment vs. binary deployment), and containers themselves have a very wide range of applications.

Kubernetes design architecture


A Kubernetes cluster consists of the node agent kubelet and the Master components (apiserver, scheduler, etcd, etc.), all built on top of a distributed storage system.

  • Kubernetes is mainly composed of the following core components (component: function):
• etcd: saves the state of the whole cluster
• apiserver: provides the only entry point for resource operations, and provides mechanisms such as authentication, authorization, access control, API registration and discovery
• controller manager: responsible for maintaining the state of the cluster, such as fault detection, automatic scaling, rolling updates, etc.
• scheduler: responsible for resource scheduling; schedules Pods onto the appropriate machines according to the configured scheduling policies
• kubelet: responsible for maintaining the life cycle of containers, as well as volume (CVI) and network (CNI) management
• Container runtime: responsible for image management and for actually running Pods and containers (CRI)
• kube-proxy: responsible for providing Service discovery and load balancing inside the cluster

• In addition to the core components, there are also some recommended add-ons:

kube-dns: responsible for providing DNS services for the whole cluster (newer k8s versions ship this functionality by default, as CoreDNS)
Ingress Controller: provides external (Internet) access for Services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters across availability zones
Fluentd-elasticsearch: provides cluster log collection, storage and query
  • The Kubernetes design concept and functionality is actually a layered architecture, similar to Linux:

Core layer: the core functionality of Kubernetes; provides the external API for building higher-level applications and the internal plug-in application execution environment
Application layer: deployment (stateless applications, stateful applications, batch tasks, cluster applications, etc.) and routing (service discovery, DNS resolution, etc.)
Management layer: system metrics (such as infrastructure, container and network metrics), automation (such as auto scaling and dynamic provisioning) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
Interface layer: the kubectl command line tool, client SDKs and cluster federation
Ecosystem layer: the large ecosystem of container cluster management and scheduling above the interface layer, which can be divided into two categories:
    • Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
    • Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, and configuration and management of the cluster itself

Kubernetes deployment

Reference: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Here we use a Harbor registry, because pulling from a local registry is much faster than pulling from the Internet:

Environment:
server1: 172.25.254.1 Harbor registry
server2: 172.25.254.2 master node
server3: 172.25.254.3 node
server4: 172.25.254.4 node

On the server2, server3 and server4 hosts:
Turn off SELinux and the iptables firewall on each node.
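A minimal sketch of those two steps (assuming a RHEL/CentOS layout with firewalld; adjust if your nodes use the iptables service instead):

setenforce 0                                                          # turn SELinux off for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it off after a reboot
systemctl disable --now firewalld                                     # stop and disable the firewall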

  • All nodes deploy the docker engine, installed from the Alibaba Cloud mirror.
# step 1: install some necessary system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add software source information
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update and install docker CE
yum -y install docker-ce		# requires the container-selinux dependency
[root@server1 yum.repos.d]# cat /etc/sysctl.d/bridge.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1		# kernel parameters so that bridged traffic goes through iptables
[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server2:/etc/sysctl.d/
 
[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server3:/etc/sysctl.d/

[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server4:/etc/sysctl.d/
# Let these two parameters take effect
[root@server2 ~]# sysctl --system
[root@server3 ~]# sysctl --system
[root@server4 ~]# sysctl --system
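If sysctl complains that the bridge keys do not exist, the br_netfilter module is probably not loaded yet; a small sketch (this step is an assumption, it is not shown in the original transcript):

modprobe br_netfilter                                         # load the bridge netfilter module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf     # load it automatically at boot
sysctl --system                                               # re-apply the settings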

 systemctl enable --now docker		# enable and start the docker service on all three nodes
  • Make docker and k8s use the same cgroup driver:
[root@server2 ~]# docker info
 Cgroup Driver: cgroupfs		# docker currently uses the cgroupfs driver; we need to change it to systemd
 
[root@server2 packages]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

[root@server2 packages]# scp /etc/docker/daemon.json server3:/etc/docker/
root@server3's password: 
daemon.json                                                                                                                                                                      100%  201   238.1KB/s   00:00    
[root@server2 packages]# scp /etc/docker/daemon.json server4:/etc/docker/
root@server4's password: 
daemon.json          

[root@server2 packages]# systemctl restart docker
[root@server2 packages]# docker info
 Cgroup Driver: systemd		# the driver is now systemd
  • Disable swap partition:
# Disable the swap partition (kubeadm requires it, and it is better for performance)
[root@server3 ~]# swapoff -a			# on server2, 3 and 4
[root@server3 ~]# vim /etc/fstab 		# comment out the swap line
[root@server3 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 28 02:35:30 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=004d1dd6-221a-4763-a5eb-c75e18655041 /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
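Instead of editing /etc/fstab by hand, the swap entry can also be commented out with a one-liner; a rough equivalent (assuming the swap line is not already commented):

swapoff -a
sed -i '/^[^#].*swap/s/^/#/' /etc/fstab     # comment out any active swap entries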

  • Install the deployment tool kubeadm:
    We download it from the Alibaba Cloud mirror:
[root@server2 yum.repos.d]# vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

[root@server2 yum.repos.d]# yum install -y kubelet kubeadm kubectl		# kubectl only needs to be installed on the master node

The other two nodes do the same operation.
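One way to repeat the repo setup and installation on the worker nodes from server2 (a sketch, assuming ssh access to server3 and server4; kubectl is left out on the workers):

for h in server3 server4; do
  scp /etc/yum.repos.d/k8s.repo $h:/etc/yum.repos.d/
  ssh $h 'yum install -y kubelet kubeadm && systemctl enable --now kubelet'
done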

[root@server2 yum.repos.d]# systemctl enable --now kubelet.service 	# enable the kubelet service
[root@server2 yum.repos.d]# kubeadm config print init-defaults 	# view the default configuration
imageRepository: k8s.gcr.io
# The default is k8s.gcr.io, which cannot be reached without bypassing the firewall to download the component images, so we modify the image repository:

[root@server2 yum.repos.d]# kubeadm config images list 		# list the required images
W0618 15:03:59.486677   14931 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.4
k8s.gcr.io/kube-controller-manager:v1.18.4
k8s.gcr.io/kube-scheduler:v1.18.4
k8s.gcr.io/kube-proxy:v1.18.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

# List them from the specified Alibaba Cloud registry
[root@server2 yum.repos.d]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
W0618 15:04:21.098999   14946 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.4
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7

[root@server2 yum.repos.d]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.18.3
# Pull image

[root@server2 yum.repos.d]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.3             3439b7546f29        4 weeks ago         117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.3             7e28efa976bd        4 weeks ago         173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.3             da26705ccb4b        4 weeks ago         162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.3             76216c34ed0c        4 weeks ago         95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        4 months ago        683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        4 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        7 months ago        288MB

Then we push these images to the Harbor registry so that the other nodes can pull from it.

[root@server1 yum.repos.d]# scp -r /etc/docker/certs.d/ server2:/etc/docker/
root@server2's password: 		# copy the Harbor CA certificate to server2
ca.crt  

[root@server2 yum.repos.d]# vim /etc/hosts
[root@server2 yum.repos.d]# cat /etc/hosts
172.25.254.1	server1	reg.caoaoyuan.org			# resolve the Harbor registry hostname
[root@server2 yum.repos.d]# docker login reg.caoaoyuan.org
Username: admin
Password: 
Login Succeeded		# Log in

[root@server2 ~]# docker images |grep reg.ca | awk '{print $1":"$2}'
reg.caoaoyuan.org/library/kube-proxy:v1.18.3		# the images above are re-tagged like this (see the tagging sketch below)
reg.caoaoyuan.org/library/kube-apiserver:v1.18.3
reg.caoaoyuan.org/library/kube-controller-manager:v1.18.3
reg.caoaoyuan.org/library/kube-scheduler:v1.18.3
reg.caoaoyuan.org/library/pause:3.2
reg.caoaoyuan.org/library/coredns:1.6.7
reg.caoaoyuan.org/library/etcd:3.4.3-0
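The re-tagging step itself is not shown in the transcript; a sketch of how the Alibaba Cloud images could be tagged for Harbor (assuming the library project already exists in the registry):

for i in `docker images | grep registry.aliyuncs.com | awk '{print $1":"$2}'`; do
  docker tag $i reg.caoaoyuan.org/library/${i##*/}     # keep only name:tag after the last /
done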


# Push the images to the Harbor registry
[root@server2 ~]# for i in `docker images |grep reg.ca | awk '{print $1":"$2}'`;do docker push $i ;done
# Delete the Alibaba Cloud images
[root@server2 ~]# for i in `docker images |grep regis | awk '{print $1":"$2}'`;do docker rmi $i ;done


The upload succeeded, so the other nodes can pull from Harbor. Make sure the certificate and the local hostname resolution are in place first:

[root@server1 harbor]# scp -r /etc/docker/certs.d/ server3:/etc/docker/
root@server3's password: 
ca.crt                                                                                                                                                                           100% 2114    39.7KB/s   00:00    
[root@server1 harbor]# scp -r /etc/docker/certs.d/ server4:/etc/docker/
root@server4's password: 
ca.crt  		# these two nodes did not have the certificate yet

Perform cluster initialization on the master node:

[root@server2 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.caoaoyuan.org/library/ --kubernetes-version=1.18.3
Your Kubernetes control-plane has initialized successfully!
...
kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg \
    --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
// A token (used to join the cluster) and a hash (used to verify the master) are generated. The token is kept for 24 hours by default.
[kubeadm@server2 ~]$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
61xkmb.qd1alzh6winolaeg   23h         2020-06-19T17:31:47+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# After it expires, you can generate a new one with kubeadm token create.
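kubeadm can also print a complete, ready-to-use join command together with a fresh token, which saves copying the hash by hand:

kubeadm token create --print-join-command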

The official recommendation is to operate the cluster as an ordinary user, so we only need to:

[root@server2 ~]# useradd kubeadm
[root@server2 ~]# visudo 	# delegate sudo authority to the kubeadm user (a sample sudoers line is shown after this block)
[root@server2 ~]# su - kubeadm 
[kubeadm@server2 ~]$ mkdir -p $HOME/.kube
[kubeadm@server2 ~]$ sudo  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config		#It's actually a certificate.
# with this credential in place, the user can operate the cluster
[kubeadm@server2 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS     ROLES    AGE   VERSION	
server2   NotReady   master   12m   v1.18.3	# currently only the master node exists, and it is NotReady (no network plug-in yet)
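The sudoers entry added with visudo above is not shown; a typical line (an assumption here, tighten it to match your own policy) would look like:

kubeadm  ALL=(ALL)  NOPASSWD: ALL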

Scale out the cluster by joining server3 and server4 to server2:

sysctl -w net.ipv4.ip_forward=1		# you may need to run this on the nodes first to enable IP forwarding
[root@server4 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
[root@server3 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597

[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
server2   NotReady   master   20m   v1.18.3
server3   NotReady   <none>   88s   v1.18.3
server4   NotReady   <none>   31s   v1.18.3
# The two nodes have been added (still NotReady until a network plug-in is deployed).


  • Install the flannel network component:
[root@server2 demo]# docker images
quay.io/coreos/flannel                              v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
# Import this network component image on nodes 3 and 4 as well (a distribution sketch follows below)
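A sketch of distributing and importing that image (the archive name flannel.tar is illustrative, not from the original transcript):

docker save -o flannel.tar quay.io/coreos/flannel:v0.12.0-amd64     # on server2
scp flannel.tar server3:/root/ && scp flannel.tar server4:/root/
docker load -i flannel.tar                                          # on server3 and server4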

# Switch to kubeadm user to apply this file.
[kubeadm@server2 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[kubeadm@server2 ~]$ kubectl get pod -n kube-system		# system component pods are all isolated in the kube-system namespace
NAME                              READY   STATUS    RESTARTS   AGE
coredns-5fd54d7f56-22fwz          1/1     Running   0          123m
coredns-5fd54d7f56-l9z5k          1/1     Running   0          123m
etcd-server2                      1/1     Running   3          124m
kube-apiserver-server2            1/1     Running   2          124m
kube-controller-manager-server2   1/1     Running   3          124m
kube-flannel-ds-amd64-6t4tp       1/1     Running   0          9m31s
kube-flannel-ds-amd64-gk9r2       1/1     Running   0          9m31s		# Network components
kube-flannel-ds-amd64-mlcvm       1/1     Running   0          9m31s
kube-proxy-f7rnh                  1/1     Running   0          104m
kube-proxy-hww5t                  1/1     Running   1          104m
kube-proxy-wn4h8                  1/1     Running   3          123m
kube-scheduler-server2            1/1     Running   3          124m
# It's all running

[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
server2   Ready    master   125m   v1.18.3
server3   Ready    <none>   106m   v1.18.3
server4   Ready    <none>   105m   v1.18.3		#ready
# We can use this cluster
  • View namespaces
[kubeadm@server2 ~]$ kubectl get pod --all-namespaces	# view pods in all namespaces
[kubeadm@server2 ~]$ kubectl get pod -o wide -n kube-system	# -o wide view details
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-5fd54d7f56-22fwz          1/1     Running   0          3h34m   10.244.2.2     server4   <none>           <none>
coredns-5fd54d7f56-l9z5k          1/1     Running   0          3h34m   10.244.1.2     server3   <none>           <none>
etcd-server2                      1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-apiserver-server2            1/1     Running   2          3h34m   172.25.254.2   server2   <none>           <none>
kube-controller-manager-server2   1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-flannel-ds-amd64-6t4tp       1/1     Running   0          100m    172.25.254.3   server3   <none>           <none>
kube-flannel-ds-amd64-gk9r2       1/1     Running   0          100m    172.25.254.2   server2   <none>           <none>
kube-flannel-ds-amd64-mlcvm       1/1     Running   0          100m    172.25.254.4   server4   <none>           <none>
kube-proxy-f7rnh                  1/1     Running   0          3h14m   172.25.254.4   server4   <none>           <none>
kube-proxy-hww5t                  1/1     Running   1          3h15m   172.25.254.3   server3   <none>           <none>
kube-proxy-wn4h8                  1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-scheduler-server2            1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
//You can see where each component runs. The flannel component uses a DaemonSet controller, whose characteristic is that one pod runs on every node;
//kube-proxy also runs on every node. A quick check is shown below.
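A quick way to confirm which components run as DaemonSets:

kubectl get daemonset -n kube-system     # flannel and kube-proxy both appear here, one pod per node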

[root@server4 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
reg.caoaoyuan.org/library/kube-proxy   v1.18.3             3439b7546f29        4 weeks ago         117MB
quay.io/coreos/flannel                 v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
reg.caoaoyuan.org/library/pause        3.2                 80d28bedfe5d        4 months ago        683kB
reg.caoaoyuan.org/library/coredns      1.6.7               67da37a9a360        4 months ago        43.8MB
server3 and server4 joined the cluster, obtained the Harbor registry information and pulled these images; Kubernetes is now ready to run. All services run as containers.
  • Auto-completion
[kubeadm@server2 ~]$  echo "source <(kubectl completion bash)" >> ~/.bashrc
[kubeadm@server2 ~]$ logout
[root@server2 demo]# su - kubeadm 
Last login: Thu Jun 18 19:26:19 CST 2020 on pts/0
[kubeadm@server2 ~]$ kubectl 
alpha          apply          certificate    convert   		# sub-commands can now be tab-completed
  • Delete a node
[kubeadm@server2 ~]$ kubectl drain server4 --delete-local-data --force --ignore-daemonsets
node/server4 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-mlcvm, kube-system/kube-proxy-f7rnh
evicting pod kube-system/coredns-5fd54d7f56-22fwz
pod/coredns-5fd54d7f56-22fwz evicted
node/server4 evicted
[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS                     ROLES    AGE     VERSION
server2   Ready                      master   3h56m   v1.18.3
server3   Ready                      <none>   3h37m   v1.18.3
server4   Ready,SchedulingDisabled   <none>   3h36m   v1.18.3		# the scheduler will no longer place pods on this node
[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS                     ROLES    AGE     VERSION
server2   Ready                      master   3h56m   v1.18.3
server3   Ready                      <none>   3h37m   v1.18.3
server4   Ready,SchedulingDisabled   <none>   3h36m   v1.18.3	
[kubeadm@server2 ~]$ kubectl delete node server4		# delete the node
node "server4" deleted
[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
server2   Ready    master   3h57m   v1.18.3
server3   Ready    <none>   3h38m   v1.18.3
[kubeadm@server2 ~]$

This only applies to nodes that joined normally. For a node that did not join normally, run directly on that node:

[root@server4 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks

//This clears the node's join information.

//To join the cluster again:
[kubeadm@server2 ~]$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
61xkmb.qd1alzh6winolaeg   19h         2020-06-19T17:31:47+08:00 	# not yet expired
[root@server4 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
//It can simply be joined again, provided all of the node configuration above has been done.

[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
server2   Ready    master   4h3m    v1.18.3
server3   Ready    <none>   3h43m   v1.18.3
server4   Ready    <none>   2m1s    v1.18.3
  • Delete a flannel container (it is restarted automatically)
[root@server4 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
56862b391eda        4e9f801d2217                          "/opt/bin/flanneld -..."   33 minutes ago      Up 33 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-sklll_kube-system_84e2eb08-2b85-4cc2-a167-5ea78629af3c_1
[root@server4 ~]# docker rm -f 56862b391eda
56862b391eda
[root@server4 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                  PORTS               NAMES
f7db2b985cc5        4e9f801d2217                          "/opt/bin/flanneld -..." 
//The cluster monitors the state of its components: when a service's container is killed, it is automatically restarted.
  • Create a pod
[kubeadm@server2 ~]$ kubectl run demo --image=nginx
pod/demo created
[kubeadm@server2 ~]$ kubectl  get pod
NAME   READY   STATUS              RESTARTS   AGE
demo   0/1     ContainerCreating   0          5s
[kubeadm@server2 ~]$ kubectl logs demo 
Error from server (BadRequest): container "demo" in pod "demo" is waiting to start: ContainerCreating
[kubeadm@server2 ~]$ kubectl describe pod demo 		#View pod details
Name:         demo
Namespace:    default
Priority:     0
Node:         server3/172.25.254.3
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
  Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/demo to server3
  Normal  Pulling    47s        kubelet, server3   Pulling image "nginx"
  Normal  Pulled     19s        kubelet, server3   Successfully pulled image "nginx"
  Normal  Created    19s        kubelet, server3   Created container demo
  Normal  Started    18s        kubelet, server3   Started container demo
[kubeadm@server2 ~]$ kubectl logs demo	# view the pod logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
[kubeadm@server2 ~]$ kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          8m58s   10.244.1.3   server3   <none>           <none>
# It's already running
[kubeadm@server2 ~]$ curl 10.244.1.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[kubeadm@server2 ~]$ kubectl delete pod demo 
pod "demo" deleted

Then we configure the registry mirror on server3 and server4:

[root@server3 ~]# vim /etc/docker/daemon.json
[root@server4 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://reg.caoaoyuan.org"],		# add this line
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
[root@server3 ~]# systemctl restart docker

From now on image pulls come from our Harbor registry, and the basic configuration of the cluster is complete.
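A simple way to verify the mirror is in effect (output will vary):

docker info | grep -A1 "Registry Mirrors"     # should list https://reg.caoaoyuan.org
docker pull nginx                             # the pull is now resolved through the Harbor mirror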
