Kubernetes (K8s) installation tutorial (practice)

reference resources:  https://www.cnblogs.com/Sunzz/p/15184167.html

1, Installation environment description

Hardware requirements

Memory: 2GB or more RAM

CPU:   2-core CPU or more

Hard disk:   30GB or more
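A quick way to verify that a machine meets these requirements (standard commands, not part of the original steps):

free -h   # total memory
nproc     # number of CPU cores
df -h /   # available disk space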

Environment used in this tutorial:

Operating system: CentOS 7.6

master:  192.168.7.111

node01:  192.168.7.112

node02:  192.168.7.113

2, Environmental preparation

Note: execute all of the following steps on every K8s node

1. Disable the firewall and SELinux

Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld && iptables -F

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0 

2. Disable the swap partition

Temporarily disable:

swapoff -a

Permanently disable (this comments out the swap entry in /etc/fstab):

sed -ri 's/.*swap.*/#&/' /etc/fstab 
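To confirm that swap is now off (optional check):

free -h   # the Swap line should show 0B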

3. Modify the hosts file

Set the host names (optional, but each node's host name must be unique)

master node:

hostnamectl set-hostname k8s-master   # example; choose your own host name

node1 node:

hostnamectl set-hostname k8s-node1   # example; choose your own host name

node2 node:

hostnamectl set-hostname k8s-node2   # example; choose your own host name

Modify the local hosts file; every node needs these entries.

Run vi /etc/hosts and add the following:

192.168.7.111 k8s-master
192.168.7.112 k8s-node1
192.168.7.113 k8s-node2 
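Optionally verify that the names resolve from each node:

ping -c 1 k8s-master
ping -c 1 k8s-node1
ping -c 1 k8s-node2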

4. Modify kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

EOF

sysctl --system
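Note: the two net.bridge.* parameters only exist while the br_netfilter module is loaded. If sysctl --system reports that they cannot be found, load the module first and make it persistent (this step is assumed here; it is not shown in the original commands but is commonly needed on CentOS 7):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf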

5. Load the ip_vs kernel modules

If the kube-proxy mode is ipvs, these modules must be loaded. This tutorial uses iptables mode, so this step is optional here.

modprobe ip_vs

modprobe ip_vs_rr

modprobe ip_vs_wrr

modprobe ip_vs_sh

modprobe nf_conntrack_ipv4 

Set the modules to load automatically on boot:

cat > /etc/modules-load.d/ip_vs.conf << EOF

ip_vs

ip_vs_rr

ip_vs_wrr

ip_vs_sh

nf_conntrack_ipv4

EOF
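To confirm that the modules are loaded (optional check):

lsmod | grep -e ip_vs -e nf_conntrack_ipv4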

3, Install docker  

1. Configure the yum source (the Alibaba Cloud mirror is used here)

yum install wget -y

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install docker  

Install the specified version of docker  

List all docker versions:

yum list docker-ce.x86_64 --showduplicates | sort

Select the version you want to install; docker 19.03.9 is installed here:

yum -y install docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9-3.el7

3. Edit docker configuration file  

mkdir -p /etc/docker/

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://gqs7xcfd.mirror.aliyuncs.com","https://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

4. Start docker service

systemctl daemon-reload && systemctl enable docker && systemctl start docker 
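To confirm that docker is up and using the systemd cgroup driver configured above (optional check):

docker info | grep -i 'cgroup driver'   # expected: Cgroup Driver: systemd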

4, Install kubeadm, kubelet, and kubectl

1. Configure the yum source (the Alibaba Cloud mirror is used here)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the specified version of kubeadm, kubelet and kubectl

yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8 

You can also install another version by specifying it explicitly, for example 1.15.7:

yum install -y kubelet-1.15.7 kubeadm-1.15.7 kubectl-1.15.7 

List all available versions:

yum list kubelet --showduplicates 

3. Set kubelet to start automatically (it will not run successfully until kubeadm init or join has been executed):

systemctl enable kubelet 

5, Deploy Kubernetes Master node

1. Initialize the cluster. Execute on the master node:

 kubeadm init \
  --kubernetes-version 1.18.8 \
  --apiserver-advertise-address=0.0.0.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.245.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers 

Note:

The version must match the versions of kubelet, kubeadm and kubectl installed above

Parameter description  

--kubernetes-version v1.18.8    # Specify the version

--apiserver-advertise-address    # The IP address the API server advertises to other components; generally the master node's IP address

--service-cidr     # The service network; it must not conflict with the node network

--pod-network-cidr     # The pod network; it must not conflict with the node network or the service network

--image-repository registry.aliyuncs.com/google_containers     # Specify the image source. The default registry k8s.gcr.io is not reachable from mainland China, so the Alibaba Cloud mirror is used here. For very new k8s versions the mirror may not yet have the images, in which case they must be obtained elsewhere.

--control-plane-endpoint    # Optional; set to the address (or DNS name) and port of a load balancer in front of the control plane

2. Wait for the images to be pulled

You can also pull the images on each node in advance. To list the required images:

kubeadm config images list --kubernetes-version 1.18.8
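To pre-pull the images from the Alibaba Cloud mirror on each node (optional; speeds up initialization):

kubeadm config images pull --kubernetes-version 1.18.8 --image-repository registry.aliyuncs.com/google_containers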

Once the images are pulled, cluster initialization continues. When it completes, kubeadm prints further instructions; keep the last two lines of the output (the kubeadm join command), as they will be needed when the worker nodes join the cluster.

3. Configure kubectl

Run the three commands printed after successful initialization:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
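Alternatively, if you are running as root, you can point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf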

4. View node information  

kubectl get nodes 

6, Join the nodes to the cluster

Before joining the cluster, each worker node must also complete sections 2, 3 and 4 above.

1. Join node1 and node2 to the cluster. Execute on each worker node, using the token and hash from your own kubeadm init output:

kubeadm join 192.168.7.111:6443 --token 1quyaw.xa7yel3xla129kfw \
    --discovery-token-ca-cert-hash sha256:470410e1180b119ebe8ee3ae2842e7a4a852e590896306ec0dab26b168d99197 
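If you have lost the join command, or the token has expired (tokens are valid for 24 hours by default), generate a new one on the master:

kubeadm token create --print-join-command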

2. View cluster nodes on the master node

kubectl get nodes 

You will see that the STATUS is NotReady; this is because no network plug-in is installed yet. It will change to Ready once the network plug-in is deployed.

7, Install plug-ins  

1. Install flannel  

Download the yaml file from the official repository:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

You can also copy directly from here:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.245.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Change the "Network" value in the net-conf.json section (line 128 of the file) so that it matches the pod network CIDR passed to kubeadm init (10.245.0.0/16 here).

Then apply the yaml file:

kubectl apply -f kube-flannel.yml

2. View flannel deployment results  

kubectl -n kube-system get pods -o wide 

3. View the status of each node  

kubectl get nodes 

4. Change the cluster's kube-proxy mode to iptables

Running k8s 1.18 in ipvs mode requires a newer kernel: with the default CentOS 7 kernel (3.10), CoreDNS name resolution fails under ipvs mode on 1.18.8, so iptables mode is used here. If your kernel is 4.x or newer, either iptables or ipvs can be used.

kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "iptables"/' | kubectl apply -f -
kubectl -n kube-system rollout restart  daemonsets.apps  kube-proxy
kubectl -n kube-system rollout restart  daemonsets.apps  kube-flannel-ds
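To confirm that the mode change took effect:

kubectl -n kube-system get cm kube-proxy -o yaml | grep mode   # expected: mode: "iptables"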

8, Deploy busybox to test cluster networking

reference resources:  https://www.cnblogs.com/Sunzz/p/15184167.html
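A minimal test sketch: start a busybox pod and resolve the kubernetes service from inside it. busybox 1.28 is used here because nslookup is known to misbehave in newer busybox images.

kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default
kubectl delete pod busybox   # clean up when finished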
