Contents
- 1. K8s multi-node deployment
- 1.1 Environment preparation
- 1.2 master02 node deployment
- 1.3 Load balancing deployment (nginx provides load balancing, keepalived provides dual-machine hot standby) (192.168.80.15, 192.168.80.16)
- 1.3.1 Configure the official online nginx yum repository
- 1.3.2 Modify the nginx configuration file: configure a layer-4 reverse-proxy load balancer and specify the node IPs and port 6443 of the two master servers in the k8s cluster
- 1.3.3 Check the configuration file syntax
- 1.3.4 Start the nginx service and check the listening port 6443
- 1.3.5 Transfer nginx.conf to 192.168.80.16
- 1.3.6 Deploy the keepalived service
- 1.3.7 Modify the keepalived configuration file
- 1.3.8 Create the nginx status-check script
- 1.3.9 Transfer keepalived.conf and check_nginx.sh to 192.168.80.16
- 1.3.10 Start the keepalived service (be sure to start nginx before starting keepalived)
- 1.3.11 On the node nodes, point bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig at the VIP
- 1.3.12 Create pods to test
- 1.3.13 On a node in the corresponding network segment, access the pod directly with a browser or the curl command
- 1.3.14 Viewing the nginx pod logs on master01 now fails for lack of permission
- 1.3.15 On master01, grant the cluster-admin role to the user system:anonymous
This post continues from the previous blog.
1. K8s multi-node deployment
1.1 Environment preparation
master02:        192.168.80.14
Load balancer 1: 192.168.80.15
Load balancer 2: 192.168.80.16
VIP:             192.168.80.100
1.2 master02 node deployment
```
#Copy the certificate files, configuration files, and service unit files of each master component from master01 to master02
scp -r /opt/etcd/ root@192.168.80.14:/opt/
scp -r /opt/kubernetes/ root@192.168.80.14:/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.14:/usr/lib/systemd/system/

#Modify the IP addresses in the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
--bind-address=192.168.80.14          #modify
--advertise-address=192.168.80.14     #modify

#Start the services on master02 and enable them to start on boot
systemctl enable --now kube-apiserver.service kube-controller-manager.service kube-scheduler.service

#View node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide    #-o wide: output additional information; for pods, the name of the node running the pod is shown

#At this point the node status shown on master02 is only what it reads from etcd; the nodes have not actually established a communication connection with master02, so a VIP is needed to associate the nodes with both master nodes
```
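As an optional sanity check (not part of the original steps), the control-plane components on master02 can be verified before moving on. Note that kubectl get cs is deprecated in newer Kubernetes releases, but it still works on the versions typically used in this kind of binary deployment:

```
#On master02: controller-manager, scheduler, and etcd should all report Healthy
kubectl get cs

#Confirm the three services are active and enabled
systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service
```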
1.3 Load balancing deployment (nginx provides load balancing, keepalived provides dual-machine hot standby) (192.168.80.15, 192.168.80.16)
1.3.1 Configure the official online nginx yum repository
```
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

yum install nginx -y
```
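Optionally (not in the original post), confirm the repo is visible and the package installed:

```
yum repolist | grep nginx    #the nginx repo should appear in the list
nginx -v                     #print the installed nginx version
```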
1.3.2 Modify the nginx configuration file: configure a layer-4 reverse-proxy load balancer and specify the node IPs and port 6443 of the two master servers in the k8s cluster
```
vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

#add a stream block
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.80.11:6443;
        server 192.168.80.14:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......
```
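The stream block requires nginx's stream module. The official nginx.org packages are built with it, but if nginx -t complains about an unknown "stream" directive, the build can be checked (a quick diagnostic, not from the original post):

```
#Look for --with-stream among the compile options (nginx -V prints to stderr)
nginx -V 2>&1 | grep -o with-stream
```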
1.3.3 Check the configuration file syntax
```
nginx -t
```
1.3.4 Start the nginx service and check the listening port 6443
```
systemctl enable --now nginx
netstat -natp | grep nginx
```
1.3.5 Transfer nginx.conf to 192.168.80.16
```
scp /etc/nginx/nginx.conf root@192.168.80.16:/etc/nginx/nginx.conf
```
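The copied configuration only takes effect once nginx on the second load balancer is validated and started as well (assuming nginx was installed there in step 1.3.1):

```
#On 192.168.80.16: validate the copied config, start nginx, and check port 6443
nginx -t
systemctl enable --now nginx
netstat -natp | grep nginx
```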
1.3.6 Deploy the keepalived service
```
yum install keepalived -y
```
1.3.7 Modify the keepalived configuration file
```
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # Recipient email addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Sender email address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER       #NGINX_MASTER on the lb01 node, NGINX_BACKUP on the lb02 node
}

#Add a script to be executed periodically
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    #path of the script that checks whether nginx is alive
}

vrrp_instance VI_1 {
    state MASTER                 #MASTER on node lb01, BACKUP on node lb02
    interface ens33              #network interface name (ens33)
    virtual_router_id 51         #VRID; must be the same on both nodes
    priority 100                 #100 on the lb01 node, 90 on the lb02 node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24        #the VIP
    }
    track_script {
        check_nginx              #the vrrp_script configured above
    }
}
```
1.3.8 Create the nginx status-check script
```
vim /etc/nginx/check_nginx.sh
#!/bin/bash
#egrep -cv "grep|$$" filters out the grep process itself and the PID of the current shell
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
```
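The script can be exercised by hand before keepalived starts calling it; a rough sanity check (run while nginx is up):

```
#Prints a non-zero count while nginx master/worker processes exist
ps -ef | grep nginx | egrep -cv "grep|$$"

#Running the script while nginx is up must leave keepalived untouched
bash /etc/nginx/check_nginx.sh
```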
1.3.9 Transfer keepalived.conf and check_nginx.sh to 192.168.80.16
```
#Run from the directory that holds both files; `pwd` expands so they land in the same path on the remote host
scp check_nginx.sh keepalived.conf root@192.168.80.16:`pwd`
```
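On 192.168.80.16 the copied keepalived.conf still describes the MASTER role. Per the comments in the configuration above, it has to become BACKUP with a lower priority; a sketch with sed, assuming the files end up at /etc/keepalived/keepalived.conf and /etc/nginx/check_nginx.sh:

```
#On 192.168.80.16: switch the copied config to the BACKUP role
sed -i 's/NGINX_MASTER/NGINX_BACKUP/; s/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf

#Make sure the check script is executable on this node too
chmod +x /etc/nginx/check_nginx.sh
```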
1.3.10 Start the keepalived service (be sure to start nginx before starting keepalived)
```
systemctl start keepalived
systemctl enable keepalived
ip a    #the VIP 192.168.80.100 should be bound to ens33 on the MASTER node
```
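Before re-pointing the nodes, the failover can be tested; a sketch based on the check script's behavior (it stops keepalived when nginx dies, so the VIP should drift to lb02):

```
#On lb01: simulate an nginx failure
systemctl stop nginx
ip a    #192.168.80.100 should disappear from ens33

#On lb02:
ip a    #192.168.80.100 should now appear on ens33

#Recover lb01: start nginx first, then keepalived (same ordering rule as above)
systemctl start nginx
systemctl start keepalived
```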
1.3.11 On the node nodes, point bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig at the VIP
```
cd /opt/kubernetes/cfg/

vim bootstrap.kubeconfig
server: https://192.168.80.100:6443

vim kubelet.kubeconfig
server: https://192.168.80.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.80.100:6443

#Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service
```
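Whether the nodes really reconnect through the VIP can be read from the layer-4 access log configured in step 1.3.2; on the load balancer currently holding the VIP:

```
#Node IPs should show up as clients, with requests distributed across both apiservers
tail /var/log/nginx/k8s-access.log
```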
1.3.12 Create pods to test
```
#Created on master01
kubectl create deploy nginx-text --image=nginx

#Created on master02
kubectl create deploy nginx-master2 --image=nginx
```
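The pod IP needed for the next step can be looked up with -o wide:

```
#Show the pod IP (e.g. 172.17.72.2 below) and the node each pod was scheduled to
kubectl get pods -o wide
```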
1.3.13 On a node in the corresponding network segment, access the pod directly with a browser or the curl command
```
curl 172.17.72.2    #the pod IP found above
```
1.3.14 Viewing the nginx pod logs on master01 now fails for lack of permission
```
kubectl logs nginx-text-78cc774878-4wv7n
```
1.3.15 On master01, grant the cluster-admin role to the user system:anonymous
```
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
```
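With the binding in place, the logs command from the previous step should now succeed:

```
#Re-run on master01; the permission error should be gone
kubectl logs nginx-text-78cc774878-4wv7n
```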