Building a Kubernetes 1.8.5 cluster on CentOS 7.4


Environment introduction

role          operating system   IP              host name   Docker version
master,node   CentOS 7.4         192.168.0.210   node210     17.11.0-ce
node          CentOS 7.4         192.168.0.211   node211     17.11.0-ce
node          CentOS 7.4         192.168.0.212   node212     17.11.0-ce

1. Basic environment configuration (performed by all servers)
a. Disable SELinux

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0
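As a quick sanity check, getenforce (part of the standard SELinux tools on CentOS) should now report Permissive:

getenforce
#Expected output after setenforce 0: Permissive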

b.Docker installation

curl -sSL https://get.docker.com/ | sh

c. Configure domestic Docker image accelerator

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://e2a6d434.m.daocloud.io

d. Enable Docker to start on boot and restart it

systemctl enable docker.service
systemctl restart docker
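Optionally verify that Docker is running and that the mirror from step c took effect (if it did, a Registry Mirrors entry should appear in docker info):

systemctl is-active docker
docker info | grep -A1 -i 'registry mirrors'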

2. Kubernetes certificate preparation (performed on the master)
a. To save time copying files to the node machines later, set up passwordless SSH from the master

ssh-keygen -t rsa
ssh-copy-id 192.168.0.211
ssh-copy-id 192.168.0.212
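Before continuing, you can confirm passwordless login works; each command should print the remote host name without prompting for a password:

ssh 192.168.0.211 hostname
ssh 192.168.0.212 hostname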

b. Download the certificate generation tools (cfssl)

yum -y install wget
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

c. CA certificate creation
#Prepare the working directory

mkdir /root/ssl
cd /root/ssl

#Create CA certificate configuration
vim ca-config.json

{ "signing": { "default": { "expiry": "87600h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "87600h" } } } }

#Create CA certificate request file
vim ca-csr.json

{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "JIANGXI", "L": "NANCHANG", "O": "k8s", "OU": "System" } ] }

#Generate CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
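This produces ca.pem, ca-key.pem, and ca.csr in the current directory. As an optional sanity check, cfssl-certinfo (installed in step b) can dump the new CA certificate:

ls ca*.pem ca.csr
cfssl-certinfo -cert ca.pem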

#Create kubernetes certificate signing request
vim kubernetes-csr.json

{ "CN": "kubernetes", "hosts": [ "127.0.0.1", "192.168.0.210", #Modify the IP address of the host "192.168.0.211", #Modify the IP address of the host "192.168.0.212", #Modify the IP address of the host "10.254.0.1", "kubernetes", "node210", #Change to the host name of your own host "node211", #Change to the host name of your own host "node212", #Change to the host name of your own host "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "JIANGXI", "L": "JIANGXI", "O": "k8s", "OU": "System" } ] }

#Generate kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

#Create admin certificate signing request
vim admin-csr.json

{ "CN": "admin", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "JIANGXI", "L": "JIANGXI", "O": "system:masters", "OU": "System" } ] }

#Generate admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#Create kube-proxy certificate signing request
vim kube-proxy-csr.json

{ "CN": "system:kube-proxy", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "JIANGXI", "L": "JIANGXI", "O": "k8s", "OU": "System" } ] }

#Generate certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
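At this point there should be four key pairs in /root/ssl, which a quick listing confirms before distribution:

ls *.pem
#Expected: admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem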

#Distribute certificate

mkdir -p /etc/kubernetes/ssl
cp -r *.pem /etc/kubernetes/ssl
cd /etc
scp -r kubernetes/ 192.168.0.211:/etc/
scp -r kubernetes/ 192.168.0.212:/etc/

3.etcd cluster installation and configuration
a. Download etcd and distribute to nodes

wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz
tar zxf etcd-v3.2.11-linux-amd64.tar.gz
mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin
scp -r /usr/local/bin/etc* 192.168.0.211:/usr/local/bin/
scp -r /usr/local/bin/etc* 192.168.0.212:/usr/local/bin/
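Both etcd and etcdctl come from the tarball; verify they are on the PATH of every node:

etcd --version
etcdctl --version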

b. Create etcd service startup file
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.0.210:2380,infra2=https://192.168.0.211:2380,infra3=https://192.168.0.212:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(The ${ETCD_*} variables are supplied by /etc/etcd/etcd.conf, created in step d below.)

c. Create necessary directories

mkdir -p /var/lib/etcd/
mkdir /etc/etcd

d. Edit the configuration file of etcd
vim /etc/etcd/etcd.conf
The /etc/etcd/etcd.conf for node210 is:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.210:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.210:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.210:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.210:2379"

The /etc/etcd/etcd.conf for node211 is:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.211:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.211:2379"

The /etc/etcd/etcd.conf for node212 is:

# [member]
ETCD_NAME=infra3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.212:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.212:2379"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.212:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.212:2379"

#Execute on all nodes to start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

If an error is reported, check /var/log/messages for troubleshooting.

e. Test whether the cluster is normal

#Verify that etcd started successfully
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
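With all three members up, the output should look roughly like the following (the member IDs will differ):

member ... is healthy: got healthy result from https://192.168.0.210:2379
member ... is healthy: got healthy result from https://192.168.0.211:2379
member ... is healthy: got healthy result from https://192.168.0.212:2379
cluster is healthy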

4. Configure Kubernetes parameters
a. Download kubernetes compiled binaries and distribute them

wget https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cp -rf kubernetes/server/bin/* /usr/local/bin/
scp -r kubernetes/server/bin/* 192.168.0.211:/usr/local/bin/
scp -r kubernetes/server/bin/* 192.168.0.212:/usr/local/bin/

#To check for newer Kubernetes releases, go to https://github.com/kubernetes/kubernetes/releases
and open the corresponding CHANGELOG-x.x.md to find the binary download links.

b. Create TLS Bootstrapping Token

cd /etc/kubernetes
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
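The file should now contain a single line starting with the 32-character random token, which you can confirm with:

cat token.csv
#Expected form: <32-hex-char token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"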

c. Create kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.0.210:6443"

#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

#Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

#Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

#Set default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#Authorize kubelet bootstrap role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

d. Create the kube-proxy kubeconfig file
export KUBE_APISERVER="https://192.168.0.210:6443"
#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

#Set client authentication parameters

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

#Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

#Set default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

e. Create kubectl kubeconfig file
export KUBE_APISERVER="https://192.168.0.210:6443"
#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

#Set client authentication parameters

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

#Set context parameters

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

#Set default context
kubectl config use-context kubernetes

f. Distribute the two files bootstrap.kubeconfig and kube-proxy.kubeconfig to the other servers

scp -r *.kubeconfig 192.168.0.211:/etc/kubernetes/
scp -r *.kubeconfig 192.168.0.212:/etc/kubernetes/
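Each node should now hold both kubeconfig files plus the certificates distributed in step 2; a quick remote listing confirms it:

ssh 192.168.0.211 'ls /etc/kubernetes /etc/kubernetes/ssl'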

5. Master installation and configuration
a. apiserver installation and configuration
#apiserver service startup file
vim /usr/lib/systemd/system/kube-apiserver.service


[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_ETCD_SERVERS \
  $KUBE_API_ADDRESS \
  $KUBE_API_PORT \
  $KUBELET_PORT \
  $KUBE_ALLOW_PRIV \
  $KUBE_SERVICE_ADDRESSES \
  $KUBE_ADMISSION_CONTROL \
  $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Configure kubernetes default configuration
vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.0.210:8080"

#Configure the apiserver parameter
vim /etc/kubernetes/apiserver

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##

## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=192.168.0.210 --bind-address=192.168.0.210 --insecure-bind-address=192.168.0.210"

## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"

## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"

## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"

## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h --allow-privileged=true"

#Start apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

#If an error occurs, check /var/log/messages
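Once the service is active, the insecure port configured above (8080 on 192.168.0.210, per the insecure-bind-address setting in /etc/kubernetes/apiserver) gives a quick health check:

curl http://192.168.0.210:8080/healthz
#Expected output: ok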

b. Controller manager service configuration
#Controller manager service startup file
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Configure the controller manager service profile
vim /etc/kubernetes/controller-manager

# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

#Start the controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager


c. scheduler service installation and configuration
#Configure the scheduler service startup file
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Configure the scheduler service profile
vim /etc/kubernetes/scheduler

# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

#Start the scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

d. Test whether the master is normal
kubectl get componentstatuses
#The results are as follows

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

6. Node installation (all nodes)
a. flannel installation and configuration (flannel provides the container network)
#Install flannel via yum
yum install -y flannel
#Check that the node certificates are in place
ls /etc/kubernetes/ssl
#Modify the flanneld.service unit file as follows
vi /usr/lib/systemd/system/flanneld.service






[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${ETCD_ENDPOINTS} \
  -etcd-prefix=${ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

(ETCD_ENDPOINTS and ETCD_PREFIX are defined in /etc/sysconfig/flanneld, edited next.)

#Modify the flannel configuration file
vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
ETCD_ENDPOINTS="https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"

# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

#Create network configuration in etcd

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

#flannel service start
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
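After flanneld is up, it writes the subnet it leased to /run/flannel/subnet.env and, with the vxlan backend configured above, creates a flannel.1 interface; both are worth checking before moving on:

cat /run/flannel/subnet.env
ip addr show flannel.1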



b. Configure Docker service startup file and integrate flannel
vim /usr/lib/systemd/system/docker.service

Above the ExecStart line, add:

EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env

and change ExecStart to read:

ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

(FLANNEL_SUBNET and FLANNEL_MTU are set by flannel in /run/flannel/subnet.env.)

The effect is as follows:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

#Restart Docker service

systemctl daemon-reload
systemctl restart docker
systemctl status docker

Remember to start flannel first, then Docker.
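If the integration worked, docker0 should now sit inside the flannel subnet leased to this node (compare the address with FLANNEL_SUBNET in /run/flannel/subnet.env):

ip addr show docker0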

c. Check whether etcd has allocated subnets

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets

The results are as follows

/kube-centos/network/subnets/172.30.1.0-24
/kube-centos/network/subnets/172.30.54.0-24
/kube-centos/network/subnets/172.30.99.0-24

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config

The results are as follows

{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

d. Install and configure kubelet
#Create kubelet service startup file
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBELET_API_SERVER \
  $KUBELET_ADDRESS \
  $KUBELET_PORT \
  $KUBELET_HOSTNAME \
  $KUBE_ALLOW_PRIV \
  $KUBELET_POD_INFRA_CONTAINER \
  $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

#kubelet authentication profile
vim /etc/kubernetes/kubelet.kubeconfig

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.0.210:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local

#kubelet profile
vim /etc/kubernetes/kubelet

The contents of /etc/kubernetes/kubelet on node210 are as follows

###
## kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.210"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.210"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

The configuration file in node211 is as follows

###
## kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.211"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.211"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

The configuration file in node212 is as follows

###
## kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.212"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.212"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

#Start kubelet service

mkdir -p /var/lib/kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

#It's easy to make mistakes here. When there are errors, check the logs in /var/log/messages for troubleshooting

#Check whether kubelet service is normal

kubectl get nodes

NAME            STATUS    ROLES     AGE       VERSION
192.168.0.210   Ready     <none>    14h       v1.8.5
192.168.0.211   Ready     <none>    14h       v1.8.5
192.168.0.212   Ready     <none>    14h       v1.8.5

e. Install and configure kube-proxy
#Configure the kube-proxy service startup file
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#The kube-proxy configuration files are as follows:
node210:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.210 --hostname-override=192.168.0.210 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node211:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.211 --hostname-override=192.168.0.211 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node212:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.212 --hostname-override=192.168.0.212 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

#Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
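Once kube-proxy is running in its default iptables mode, it should populate the nat table with KUBE-SERVICES chains; a quick look confirms rules are being written:

iptables -t nat -L KUBE-SERVICES | head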

f. Set the iptables FORWARD chain policy to ACCEPT on all nodes
vim /usr/lib/systemd/system/forward.service

[Unit]
Description=iptables forward
Documentation=http://iptables.org/
After=network.target docker.service

[Service]
# iptables -P exits immediately rather than forking, so oneshot with
# RemainAfterExit fits better than Type=forking here
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStop=/usr/sbin/iptables -P FORWARD ACCEPT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

#Start forward service

systemctl daemon-reload
systemctl enable forward
systemctl start forward
systemctl status forward

7. Test whether the cluster works normally
a. Create a deployment
kubectl run nginx --replicas=2 --labels="run=nginx-service" --image=nginx --port=80
b. Expose the deployment as a NodePort service so it is accessible from outside the cluster
kubectl expose deployment nginx --type=NodePort --name=nginx-service
c. View service status




kubectl describe svc nginx-service

Name:              nginx-service
Namespace:         default
Labels:            run=nginx-service
Annotations:       <none>
Selector:          run=nginx-service
Type:              NodePort
IP:                10.254.84.99
Port:              <unset>  80/TCP
NodePort:          <unset>  30881/TCP
Endpoints:         172.30.1.2:80,172.30.54.2:80
Session Affinity:  None
Events:            <none>

d. View pods startup

kubectl get pods

NAME                     READY     STATUS    RESTARTS   AGE
nginx-2317272628-nsfrr   1/1       Running   0          1m
nginx-2317272628-qbbgg   1/1       Running   0          1m

e. Access from outside the cluster
http://192.168.0.210:30881
http://192.168.0.211:30881
http://192.168.0.212:30881
All three URLs should serve the nginx welcome page.
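A quick command-line check from any machine that can reach the nodes (30881 is the NodePort reported by kubectl describe svc above; the port on your cluster may differ):

curl -I http://192.168.0.210:30881
#Expect HTTP/1.1 200 OK with a Server: nginx header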



If you cannot access it, use iptables -nL to check whether the FORWARD chain policy is ACCEPT.

Problem handling:
1.kubelet: E0428 14:59:15.715224 1078 pod_workers.go:186] Error syncing pod aecdf24b-4ab0-11e8-90c5-000c2935cc91 ("kube-router-fzxz7_kube-system(aecdf24b-4ab0-11e8-90c5-000c2935cc91)"), skipping: pod cannot be run: pod with UID "aecdf24b-4ab0-11e8-90c5-000c2935cc91" specified privileged container, but is disallowed

Processing method: add --allow-privileged=true to the configuration files of both the apiserver and the kubelet.

2.Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Processing method: add --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice to kubelet's configuration file.

3.connections, error: error deleting connection tracking state for UDP service IP: 10.254.0.2, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH

Processing method: the conntrack tools are not installed; run yum -y install conntrack-tools.
