Building Kubernetes on LinuxONE

I. Introduction

Kubernetes is the most popular container orchestration technology today and has become the de facto standard PaaS management platform in the open source community. Most existing articles describe building a Kubernetes platform on x86; in this article, the author builds an open-source Kubernetes platform on LinuxONE (s390x).
There are two main ways to build a K8S platform:

  1. The first is based on binaries; building each component by hand deepens the understanding of the K8S services step by step.
  2. kubeadm, the officially recommended automated deployment tool.
    This time the official kubeadm method is used. kubeadm runs K8S's own services as K8S pods, while the prerequisite base services run as local system services (see the check after this list).
    Master node components:
    docker, kubelet, and kubeadm run as local system services
    kube-proxy is a dynamic pod managed by K8S
    kube-apiserver, kube-controller-manager, and etcd are hosted in pods
    Node components:
    docker and kubelet run as local system services
    kube-proxy is a dynamic pod managed by K8S
    flannel is a dynamic pod managed by K8S
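To see this split for yourself on a running master later, you can check where each piece lives (a verification sketch; the manifest path is the kubeadm default):

# control-plane components run as static pods; the kubelet reads their manifests from disk
ls /etc/kubernetes/manifests
# docker and the kubelet themselves are ordinary system services
systemctl status kubelet docker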

II. Installation

1. Environment

System version     IP address       Hostname
Ubuntu 18.04.1     172.16.35.140    master
Ubuntu 18.04.1                      worker-1

2. Install Docker
Install the base packages:

apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

Add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker apt repository (note the s390x architecture for LinuxONE):

sudo add-apt-repository \
 "deb [arch=s390x] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) \
 stable"

Install Docker:

apt-get update; apt-get install -y docker-ce
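You can verify the installation and, optionally, switch Docker's cgroup driver to systemd; kubeadm's preflight check further below warns that "cgroupfs" is not the recommended driver. The daemon.json snippet is an optional tweak, not part of the original steps:

docker info --format '{{.Architecture}} {{.CgroupDriver}}'
# expected: s390x cgroupfs

cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker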

3. Install kubelet and kubeadm
Add the Kubernetes apt source (the Aliyun mirror is used here, since the upstream repository may not be reachable) and install:

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF  
apt-get update
apt-get install -y kubelet kubeadm kubectl
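Optionally hold the packages at the current version so an unattended apt upgrade cannot move the cluster components unexpectedly (a common precaution, not part of the original steps):

apt-mark hold kubelet kubeadm kubectl
kubeadm version -o short   # should print v1.17.4 here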

4. Initialize the environment with kubeadm
Before initializing, make the following preparations:

Initialize environment:
1. Host-name based communication between nodes
2. Time synchronization
3. Firewall off
4. Swap off, IP forwarding and bridge filtering on (a sketch for items 1 to 3 follows this list):

swapoff -a && sysctl -w vm.swappiness=0 && sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
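A minimal sketch for items 1 to 3, assuming chrony for time synchronization and ufw as the firewall frontend (both are assumptions; any equivalent tools work), plus making the swap change survive reboots:

# 1. host-name based communication: add every node to /etc/hosts on every node
echo "172.16.35.140 master" >> /etc/hosts
# 2. time synchronization
apt-get install -y chrony && systemctl enable --now chrony
# 3. firewall off
ufw disable
# keep swap disabled across reboots
sed -i '/\sswap\s/ s/^/#/' /etc/fstab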

Check the base Docker images kubeadm needs:

root@master:/etc/apt# kubeadm config images list
W0321 08:51:12.828065   19587 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving
W0321 08:51:12.828143   19587 version.go:102] falling back to the local client version: v1.17.4
W0321 08:51:12.828250   19587 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 08:51:12.828275   19587 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

The Docker images we need are listed above. Due to factors that cannot be described, k8s.gcr.io is not directly reachable, so we have to download these images ourselves. I have uploaded s390x builds of the images to my Docker Hub account; you can pull them from there:

 docker pull erickshi/kube-apiserver-s390x:v1.17.4
 docker pull erickshi/kube-scheduler-s390x:v1.17.4
 docker pull erickshi/kube-controller-manager-s390x:v1.17.4
 docker pull erickshi/pause-s390x:3.1
 docker pull erickshi/coredns:s390x-1.6.5
 docker pull erickshi/etcd:3.4.3-0

After downloading, re-tag the images with the exact names kubeadm listed above; otherwise kubeadm will try to download them itself:

 docker tag erickshi/kube-apiserver-s390x:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
 docker tag erickshi/kube-scheduler-s390x:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
 docker tag erickshi/kube-controller-manager-s390x:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag erickshi/pause-s390x:3.1 k8s.gcr.io/pause:3.1
docker tag erickshi/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag erickshi/coredns:s390x-1.6.5 k8s.gcr.io/coredns:1.6.5
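Before initializing, it is worth confirming that every image named by kubeadm config images list now exists locally under its k8s.gcr.io name; note that kube-proxy is on that list as well and needs the same pull-and-tag treatment if it is missing:

docker images "k8s.gcr.io/*"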

Now formally initialize the cluster:

root@master:~#  kubeadm init --kubernetes-version=v1.17.4  --service-cidr=10.96.0.0/12
W0321 09:57:23.233367    9597 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 09:57:23.233401    9597 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.35.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.16.35.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.35.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0321 09:57:32.529825    9597 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0321 09:57:32.530693    9597 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.001699 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rp81u6.x7rky04rds2knxb8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.35.140:6443 --token rp81u6.x7rky04rds2knxb8 \
            --discovery-token-ca-cert-hash sha256:ff32332337f679859b4c34a888c42c963b86148f3ede24bf980a435183beb4be

The initialization succeeded. Next, copy the admin credentials to the location kubectl expects:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
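Since every command in this walkthrough runs as root, an equivalent shortcut is to point kubectl straight at the admin config:

export KUBECONFIG=/etc/kubernetes/admin.conf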

Let's look at the current environment:

root@master:/etc/apt# kubectl  get node
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   159m   v1.17.4
root@master:/etc/apt# kubectl  get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Let's look at the services hosted as pods in K8S. Note that the master is NotReady and the coredns pods are Pending because no pod network add-on has been installed yet:

root@master:/etc/apt# kubectl  get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-gfjk2         0/1     Pending   0          159m
kube-system   coredns-6955765f44-l25vq         0/1     Pending   0          159m
kube-system   etcd-master                      1/1     Running   0          159m
kube-system   kube-apiserver-master            1/1     Running   0          159m
kube-system   kube-controller-manager-master   1/1     Running   0          159m
kube-system   kube-proxy-xfw6v                 1/1     Running   0          159m
kube-system   kube-scheduler-master            1/1     Running   0          159m

Install the flannel network below; this is what lets the node become Ready and the coredns pods get scheduled.
First download and load the flannel image, then apply the manifest:

wget https://github.com/coreos/flannel/releases/download/v0.12.0/flanneld-v0.12.0-s390x.docker
docker load < flanneld-v0.12.0-s390x.docker

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
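Once the manifest is applied, the flannel DaemonSet should start on the master and the coredns pods should leave Pending; the app=flannel label comes from the kube-flannel.yml above:

kubectl -n kube-system get pod -l app=flannel
kubectl -n kube-system get pod | grep coredns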

View the current node again:

root@master:~# kubectl  get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   2m31s   v1.17.4
root@master:~#
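The worker can now be added by repeating the Docker and kubelet installation, the preparation steps, and the image pull-and-tag workaround on worker-1, and then running the join command printed by kubeadm init above:

kubeadm join 172.16.35.140:6443 --token rp81u6.x7rky04rds2knxb8 \
            --discovery-token-ca-cert-hash sha256:ff32332337f679859b4c34a888c42c963b86148f3ede24bf980a435183beb4be

Afterwards, kubectl get node on the master should show worker-1 joining and becoming Ready once flannel starts on it.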
