Kubernetes 1.17.3 installation - Ultra detailed installation steps

For readers who are new to Kubernetes (hereinafter referred to as k8s), the first step is installing it, so Chaochao has written this article to record the whole installation process. Even if you are a complete beginner, following these steps exactly will get you to a working installation. Let's start our installation journey. There is a lot of content, and all of it is practical, with no filler~~

If you feel tired, remember this sentence: there are no shortcuts to the places we want to go, only steady, down-to-earth steps toward poetry and distant lands!

Friendly tips:

1. The github address of the yaml file required below:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

You can fork my github repository into your own. Don't forget to star it on github~~

2. How to obtain the required images: the images on the Baidu netdisk are large and may be slow to download. If you want to obtain them faster, see the end of the article

Link: https://pan.baidu.com/s/1uclniclnidre5niomqvxig 
Extraction code: xk3y

1, Prepare the experimental environment

1. Prepare two centos7 virtual machines to install k8s cluster. The following is the configuration of the two virtual machines

K8s master (192.168.124.16) configuration:

Operating system: CentOS 7.4, CentOS 7.5, CentOS 7.6 or later
 Configuration: 4-core CPU, 8 GB memory, two 60 GB hard disks
 Network: bridged

K8s node (192.168.124.26) configuration:

Operating system: CentOS 7.6
 Configuration: 4-core CPU, 4 GB memory, two 60 GB hard disks
 Network: bridged

2, Initialize the experimental environment

1. Configure static ip

Configure the virtual machine or physical machine with a static ip address, so that the ip address does not change after the machine is restarted.

1.1 configure the network on the k8s master node

Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.124.16
NETMASK=255.255.255.0
GATEWAY=192.168.124.1
DNS1=192.168.124.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

After modifying the configuration file, you need to restart the network service for the configuration to take effect. The command to restart the network service is as follows:
service network restart

Note:

ifcfg-ens33 file configuration explanation:

IPADDR=192.168.124.16    #The IP address must be on the same network segment as your host
NETMASK=255.255.255.0    #Subnet mask; must match your network segment
GATEWAY=192.168.124.1    #Gateway; open cmd on your own computer and run ipconfig /all to find it
DNS1=192.168.124.1       #DNS; found the same way, via ipconfig /all
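As a quick sanity check, IPADDR and GATEWAY must land in the same subnet under NETMASK, or routing will break after the restart. The small helper below is an illustrative sketch (the `net` and `same_subnet` function names are ours, not system tools):

```shell
# Illustrative check: an IP and its gateway must share a network address
# under the configured netmask, or the default route will not work.
net() {                          # net <ip> <mask> -> network address
  mask=$2
  old_ifs=$IFS; IFS=.
  set -- $1; i1=$1 i2=$2 i3=$3 i4=$4
  set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
  IFS=$old_ifs
  echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}
same_subnet() {                  # same_subnet <ip> <gateway> <mask> -> yes|no
  if [ "$(net "$1" "$3")" = "$(net "$2" "$3")" ]; then echo yes; else echo no; fi
}
same_subnet 192.168.124.16 192.168.124.1 255.255.255.0   # prints: yes
```

If this prints no, fix the addresses before restarting the network service.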
1.2 configure the network on the k8s node

Modify the /etc/sysconfig/network-scripts/ifcfg-ens33 file as follows:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.124.26
NETMASK=255.255.255.0
GATEWAY=192.168.124.1
DNS1=192.168.124.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

After modifying the configuration file, you need to restart the network service for the configuration to take effect. The command to restart the network service is as follows:
service network restart

2. Install the basic software packages (run on every node)

yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack

3. Turn off the firewalld firewall (run on every node). CentOS 7 uses firewalld by default; stop it and disable the service

systemctl stop firewalld && systemctl disable firewalld

4. Install iptables (run on every node). If you are not used to firewalld, you can install iptables instead. This step is optional; decide according to your actual needs

4.1 install iptables

yum install iptables-services -y

4.2 disable iptables

service iptables stop && systemctl disable iptables

5. Time synchronization (run on every node)

5.1 time synchronization

ntpdate cn.pool.ntp.org

5.2 edit the scheduled task to synchronize every hour
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
#Note: the minute field must be fixed (0 here); "* */1 * * *" would run every minute

6. Turn off selinux (run on every node)

Disable selinux permanently, so that it remains off after the machine is restarted

Modify the /etc/sysconfig/selinux file:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

After the above file is modified, you need to restart the virtual machine, which can be forced to restart:

reboot -f

7. Turn off the swap partition (run on every node)

swapoff -a
#Permanently disable, open / etc/fstab to comment out the swap line.
sed -i 's/.*swap.*/#&/' /etc/fstab
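To see exactly what the sed command does before pointing it at the real /etc/fstab, you can rehearse it on a throwaway copy; the sample fstab content below is illustrative:

```shell
# Rehearse the swap-disabling sed on a temporary file first.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults  0 0
/dev/mapper/centos-swap swap  swap  defaults  0 0
EOF
sed -i 's/.*swap.*/#&/' "$tmp"    # same command as above; & reinserts the matched line
grep swap "$tmp"                   # the swap line is now commented out with a leading #
rm -f "$tmp"
```

The `&` in the replacement stands for the whole matched line, so the pattern simply prefixes every line mentioning swap with `#`.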

8. Modify kernel parameters (run on every node)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
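A small validator can confirm that both bridge flags were written correctly. This is an illustrative sketch (the `check_k8s_sysctl` helper name is ours); it is demonstrated on a temporary copy here, and on a real node you would point it at /etc/sysctl.d/k8s.conf:

```shell
# Parse "key = value" lines and report ok only when both bridge flags are 1.
check_k8s_sysctl() {
  awk -F' *= *' '
    $1 == "net.bridge.bridge-nf-call-iptables"  && $2 == 1 { a = 1 }
    $1 == "net.bridge.bridge-nf-call-ip6tables" && $2 == 1 { b = 1 }
    END { print ((a && b) ? "ok" : "missing settings") }' "$1"
}
tmp=$(mktemp)
printf 'net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' > "$tmp"
check_k8s_sysctl "$tmp"    # prints: ok
rm -f "$tmp"
```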

9. Modify the host name

On k8s master (192.168.124.16):

hostnamectl set-hostname k8s-master

On k8s node (192.168.124.26):

hostnamectl set-hostname k8s-node

10. Configure the hosts file (run on every node)

Add the following two lines to the / etc/hosts file:

192.168.124.16 k8s-master
192.168.124.26 k8s-node
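The two lines can also be appended idempotently, so rerunning the step never duplicates entries. A minimal sketch (the `add_host` helper is ours; the mktemp file stands in for /etc/hosts on a real node):

```shell
# Append "<ip> <name>" to a hosts file only if <name> is not already present.
add_host() {                     # add_host <file> <ip> <name>
  grep -qw "$3" "$1" || echo "$2 $3" >> "$1"
}
hosts=$(mktemp)                  # use /etc/hosts on the real nodes
add_host "$hosts" 192.168.124.16 k8s-master
add_host "$hosts" 192.168.124.26 k8s-node
add_host "$hosts" 192.168.124.16 k8s-master   # rerun: changes nothing
cat "$hosts"                     # exactly two lines
rm -f "$hosts"
```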

11. Configure passwordless login from k8s master to k8s node

Operate on the k8s master

ssh-keygen -t rsa
#Press Enter at every prompt
ssh-copy-id -i .ssh/id_rsa.pub root@192.168.124.26
#This prompts for a password; enter the root password of the k8s node

3, Install kubernetes 1.17.3

1. Modify the yum source (run on every node)

(1) Back up the original yum source

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

(2) Download Alibaba's yum source

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

(3) Generate new yum cache

yum makecache fast

(4) Configure the yum source required for k8s installation

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

(5) Clean up yum cache

yum clean all

(6) Generate new yum cache

yum makecache fast

(7) Update package

yum -y update

(8) Install package

yum -y install yum-utils device-mapper-persistent-data lvm2

(9) Add new software source

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2. Install docker 19.03 (run on every node)

2.1 view the available docker versions

yum list docker-ce --showduplicates |sort -r

2.2 install version 19.03

yum install -y docker-ce-19*
systemctl enable docker && systemctl start docker
#Check docker's status; if it is active (running), docker is working normally
systemctl status docker

2.3 modify the docker configuration file (cgroup driver, log and storage options)

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
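Before restarting docker it is worth confirming that daemon.json is valid JSON, since a syntax error will stop dockerd from starting at all. One illustrative way, demonstrated here on a temporary copy (on a real node point it at /etc/docker/daemon.json; note that CentOS 7 may ship `python` rather than `python3`):

```shell
# Validate a daemon.json-style file; json.tool exits non-zero on bad JSON.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file"
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: ok"
rm -f "$tmp"
```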

2.4 restart docker to make configuration effective

systemctl restart docker

3. Install kubernetes 1.17.3

3.1 install kubeadm and kubelet on both k8s master and k8s node
yum install -y kubeadm-1.17.3 kubelet-1.17.3
systemctl enable kubelet
3.2 upload the images to the k8s master and k8s node nodes, then load them manually with docker load. The images are on the Baidu netdisk; I downloaded them from the official sources, so you can use them with confidence
docker load -i  kube-apiserver.tar.gz
docker load -i   kube-scheduler.tar.gz 
docker load -i   kube-controller-manager.tar.gz
docker load -i  pause.tar.gz 
docker load -i  cordns.tar.gz 
docker load -i  etcd.tar.gz
docker load -i  kube-proxy.tar.gz 
docker load -i cni.tar.gz
docker load -i calico-node.tar.gz

docker load -i  kubernetes-dashboard_1_10.tar.gz 
docker load -i  metrics-server-amd64_0_3_1.tar.gz
docker load -i  addon.tar.gz

Explanation of image versions:

pause: 3.1
etcd: 3.4.3
coredns: 1.6.5
cni: 3.5.3
calico: 3.5.3
apiserver, scheduler, controller-manager, kube-proxy: 1.17.3
kubernetes-dashboard: 1.10.1
metrics-server: 0.3.1
addon-resizer: 1.8.4

3.3 initialize k8s cluster at k8s master node

kubeadm init --kubernetes-version=v1.17.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 192.168.124.16

As shown below, the initialization is successful

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.124.16:6443 --token i9m8e5.z12dnlekjbmuebsk \
    --discovery-token-ca-cert-hash sha256:2dc931b4508137fbe1bcb93dc84b7332e7e874ec5862a9e8b8fff9f7c2b57621 

Note:

The kubeadm join ... command must be saved; it is what we will run on the node to join it to the cluster in step 3.5. The token and hash differ on every execution of kubeadm init, so record the output of your own run. If it gets lost, a fresh join command can be printed on the master with: kubeadm token create --print-join-command

3.4 on the k8s master node, execute the following to gain permission to operate k8s resources

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes 

The output is shown below:

NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m13s   v1.17.3

3.5 add the k8s node to the k8s cluster (run on the node)

kubeadm join 192.168.124.16:6443 --token i9m8e5.z12dnlekjbmuebsk \
    --discovery-token-ca-cert-hash sha256:2dc931b4508137fbe1bcb93dc84b7332e7e874ec5862a9e8b8fff9f7c2b57621 

Note:

The join command above is the one generated during initialization in step 3.3

3.6 view the cluster node status in k8s master node

kubectl get nodes

The output is shown below:

NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   3m48s   v1.17.3
k8s-node     NotReady   <none>   6s      v1.17.3

As you can see above, the STATUS is NotReady, because no network plugin (such as calico or flannel) has been installed yet

3.7 install the calico network plugin on the k8s master node

Execute the following on the master node. Before applying, confirm that the can-reach address under IP_AUTODETECTION_METHOD in calico.yaml is reachable from your nodes, and that CALICO_IPV4POOL_CIDR matches the --pod-network-cidr used at initialization:
kubectl apply -f calico.yaml

cat calico.yaml

#Calico Version v3.5.3
#https://docs.projectcalico.org/v3.5/releases#v3.5.3
#This manifest includes the following component versions:
#calico/node:v3.5.3
#calico/cni:v3.5.3

#This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
#Typha is disabled.
  typha_service_name: "none"
  #Configure the Calico backend to use.
  calico_backend: "bird"

  #Configure the MTU to use
  veth_mtu: "1440"

  #The CNI network configuration to install on each node.  The special
  #values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

#This manifest installs the calico/node container, as well
#as the Calico CNI plugins and network config on
#each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        #This, along with the CriticalAddonsOnly toleration below,
        #marks the pod as a critical add-on, ensuring it gets
        #priority scheduling and that its resources are reserved
        #if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        #Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        #Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      #Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      #deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        #This container installs the Calico CNI binaries
        #and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.5.3
          command: ["/install-cni.sh"]
          env:
            #Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            #The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            #Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            #CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            #Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      containers:
        #Runs calico/node container on each Kubernetes node.  This
        #container programs network policy and routes on each
        #host.
        - name: calico-node
          image: quay.io/calico/node:v3.5.3
          env:
            #Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            #Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            #Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            #Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            #Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            #Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.124.56"
            #Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            #Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            #The default IPv4 pool to create on startup if none exists. Pod IPs will be
            #chosen from this range. Changing this value after installation will have
            #no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            #Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            #Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            #Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            #Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
      volumes:
        #Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        #Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
#Create all the CustomResourceDefinitions needed for
#Calico policy and networking mode.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
   name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---

#Include a clusterrole for the calico-node DaemonSet,
#and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  #The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      #Used to discover service IPs for advertisement.
      - watch
      - list
      #Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      #Needed for clearing NodeNetworkUnavailable flag.
      - patch
      #Calico stores some configuration information in node annotations.
      - update
  #Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  #Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  #The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  #Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  #Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  #Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  #These permissions are only required for upgrade from v2.6, and can
  #be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---

Check whether calico is in the Running state on the k8s master node

kubectl get pods -n kube-system

As shown below, calico is deployed normally. If the calico deployment fails, coredns will stay stuck in the ContainerCreating state

calico-node-rkklw                    1/1     Running   0          3m4s
calico-node-rnzfq                    1/1     Running   0          3m4s
coredns-6955765f44-jzm4k             1/1     Running   0          25m
coredns-6955765f44-mmbr7             1/1     Running   0          25m
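On a larger cluster it is tedious to eyeball this output, so a small filter can flag any pod whose status is not Running. This is an illustrative helper of ours, not a kubectl feature; on a live cluster you would pipe `kubectl get pods -n kube-system --no-headers` into it, and here it is fed the sample output above:

```shell
# Print every pod whose STATUS column is not "Running"; exit 1 if any found.
not_running() { awk '$3 != "Running" { bad = 1; print "not ready:", $1 } END { exit bad }'; }
not_running <<'EOF' && echo "all pods running"
calico-node-rkklw                    1/1     Running   0          3m4s
calico-node-rnzfq                    1/1     Running   0          3m4s
coredns-6955765f44-jzm4k             1/1     Running   0          25m
coredns-6955765f44-mmbr7             1/1     Running   0          25m
EOF
```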

View the node STATUS on the k8s master node

kubectl get nodes

As shown below, the STATUS is Ready, indicating that the cluster is in a normal state

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   9m52s   v1.17.3
k8s-node     Ready    <none>   6m10s   v1.17.3

4. Install kubernetes dashboard (the kubernetes web UI)

Operate on the k8s master node:
kubectl apply -f kubernetes-dashboard.yaml

cat kubernetes-dashboard.yaml

#Copyright 2017 The Kubernetes Authors.

#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.

#Configuration to deploy release version of the Dashboard UI compatible with
#Kubernetes 1.8.
#Example usage: kubectl create -f <this_file>

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
    name: cluster-watcher
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - 'get'
  - 'list'
- nonResourceURLs:
  - '*'
  verbs:
  - 'get'
  - 'list'
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
---
# ------------------- Dashboard Deployment ------------------- #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
    name: dashboard
    namespace: kube-system
    annotations:
        kubernetes.io/ingress.class: traefik
spec:
    rules:
    -   host: dashboard.multi.io
        http:
            paths:
            -   backend:
                    serviceName: kubernetes-dashboard
                    servicePort: 443
                path: /

To check whether the dashboard was installed successfully:

kubectl get pods -n kube-system

As shown below, the dashboard installation is successful

kubernetes-dashboard-7898456f45-8v6pw 1/1 Running 0 61s

View the Service in front of the dashboard

kubectl get svc -n kube-system

kubernetes-dashboard   NodePort   10.106.68.182   <none>   443:32505/TCP   12m

As you can see above, the service type is NodePort, so you can reach the kubernetes dashboard at the k8s master node's IP on port 32505. In my environment the address is:

https://192.168.124.16:32505/
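The port can also be pulled out of the kubectl output programmatically rather than read by eye. An illustrative awk one-liner, shown here against the captured sample line (on a live cluster you could instead use `kubectl get svc kubernetes-dashboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'`):

```shell
# Extract the NodePort from a "kubectl get svc" line; column 5 is PORT(S),
# e.g. "443:32505/TCP", and the NodePort sits between ":" and "/".
svc_line='kubernetes-dashboard   NodePort   10.106.68.182   <none>   443:32505/TCP   12m'
port=$(printf '%s\n' "$svc_line" | awk '{ split($5, p, /[:\/]/); print p[2] }')
echo "https://192.168.124.16:$port/"    # prints: https://192.168.124.16:32505/
```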

5. Install the metrics monitoring plugin

Operate on the k8s master node:
kubectl apply -f metrics.yaml

cat metrics.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=300m
          - --extra-cpu=20m
          - --memory=200Mi
          - --extra-memory=10Mi
          - --threshold=5
          - --deployment=metrics-server
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
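
Before relying on kubectl top, it is worth confirming that the APIService declared at the end of metrics.yaml is actually serving. On a real cluster you would run kubectl get apiservice v1beta1.metrics.k8s.io; the sketch below parses a hypothetical sample output line, since the True/False AVAILABLE column is what matters:

```shell
# Hypothetical sample line from `kubectl get apiservice v1beta1.metrics.k8s.io`
line='v1beta1.metrics.k8s.io   kube-system/metrics-server   True   8m19s'
# Column 3 is AVAILABLE; True means the apiserver can reach metrics-server
available=$(echo "$line" | awk '{print $3}')
if [ "$available" = "True" ]; then
  echo "metrics API available"
fi
```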

After all the above components are installed, run kubectl get pods -n kube-system to check whether each component is normal. A STATUS of Running means the component is healthy, as shown below:

NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-rkklw                       1/1     Running   0          4h29m
calico-node-rnzfq                       1/1     Running   0          4h29m
coredns-6955765f44-jzm4k                1/1     Running   0          4h52m
coredns-6955765f44-mmbr7                1/1     Running   0          4h52m
etcd-k8s-master                         1/1     Running   0          4h52m
kube-apiserver-k8s-master               1/1     Running   0          4h52m
kube-controller-manager-k8s-master      1/1     Running   1          4h52m
kube-proxy-jch6r                        1/1     Running   0          4h52m
kube-proxy-pgncn                        1/1     Running   0          4h43m
kube-scheduler-k8s-master               1/1     Running   1          4h52m
kubernetes-dashboard-7898456f45-8v6pw   1/1     Running   0          177m
metrics-server-5cf9669fbf-bdl8z         2/2     Running   0          8m19s
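
To avoid scanning that listing by eye, here is a quick sketch that flags any pod whose STATUS is not Running, shown here against two sample lines from the output above (on a real cluster, pipe in `kubectl get pods -n kube-system --no-headers` instead):

```shell
# Sample pod lines (NAME READY STATUS RESTARTS AGE) from the listing above
pods='calico-node-rkklw 1/1 Running 0 4h29m
metrics-server-5cf9669fbf-bdl8z 2/2 Running 0 8m19s'
# Print any line whose 3rd column (STATUS) is not Running
not_running=$(echo "$pods" | awk '$3 != "Running"')
if [ -z "$not_running" ]; then
  echo "all kube-system pods Running"
else
  echo "$not_running"
fi
```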

Writing this article took the author great pains: the full text runs to about 10,000 words and the content is extensive, but the knowledge summary is complete. As long as you follow the steps, you can build a k8s cluster of your own. If you want to continue learning k8s, free installation videos for each version are available; you can get them by joining the technical exchange group~~

One sentence for you:
If you can improve a little every day, you will improve a lot every month and every year. Keep going~~

Tags: Linux Kubernetes Docker yum network

Posted on Tue, 10 Mar 2020 01:53:26 -0400 by outpost