Installing Kubernetes with kubeadm

This article mainly explains the installation of Kubernetes under CentOS 7 ...
1.1 installation via minikube (official minikube)
2.1 installation of kubeadm
2.2 installation of a single-master kubernetes cluster
2.3 high availability kubernetes cluster installation


1 standalone installation

1.1 installation via minikube (official minikube)

This section introduces installing a local standalone Kubernetes with the minikube tool.

1.1.1 minikube installation

If you want to run minikube in a virtual machine, you first need to install virtualization software such as VirtualBox or KVM.

This article installs directly on the host, so no virtual machine is required. With a direct installation, some minikube commands are not supported, such as minikube docker-env.

minikube installation is simple; it is a single executable.

Execute the following commands:

[root@k8s-1 ~]# curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
[root@k8s-1 ~]# chmod +x minikube
[root@k8s-1 ~]# sudo cp minikube /usr/local/bin
[root@k8s-1 ~]# rm minikube

Check whether the installation is successful

[root@k8s-1 ~]# minikube version
minikube version: v1.0.1

minikube common commands:

  • minikube version, show the minikube version
  • minikube start, start the cluster
  • minikube ssh, ssh into the virtual machine
  • minikube logs, show the minikube logs
  • minikube dashboard, start the minikube dashboard
  • minikube ip, show the virtual machine address
  • minikube stop, stop the virtual machine
  • minikube delete, delete the virtual machine

1.1.2 kubernetes installation

After installing minikube, execute the following command

[root@k8s-1 ~]# minikube start --vm-driver=none

This first downloads an ISO file. The download always got stuck halfway for me on Linux, so I downloaded it separately and put it in the /root/.minikube/cache/iso/ directory.

Sometimes errors occur at the Downloading kubeadm and Downloading kubelet steps, as follows:

[root@k8s-1 ~]# minikube start --vm-driver=none
o   minikube v1.0.1 on linux (amd64)
i   Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
:   Restarting existing none VM for "minikube" ...
:   Waiting for SSH access ...
-   "minikube" IP address is 192.168.110.145
-   Configuring Docker as the container runtime ...
-   Version of container runtime is 18.09.6
-   Preparing Kubernetes environment ...
X   Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/s-minikube/storage-provisioner_v1.8.1: no such file or directory
@   Downloading kubeadm v1.14.1
@   Downloading kubelet v1.14.1
Failed to update cluster
X   Error: [DOWNLOAD_RESET_BY_PEER] downloading binaries: downloading kubelet: Error downloading kubelet v1.14.1:

This is most likely because Google's servers cannot be reached. Following suggestions found online, downloading the binaries into the working directory first works; it seems minikube checks the current directory before downloading. They can probably also be downloaded manually and copied to /root/.minikube/cache/v1.14.1/.

[root@k8s-1 ~]# curl -Lo kubeadm http://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm
[root@k8s-1 ~]# curl -Lo kubelet http://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubelet
[root@k8s-1 ~]# minikube start --vm-driver=none

Without a proxy server it still cannot reach the Internet, and when docker cannot pull images from the Google registry, the following output appears.

X   Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: running command: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: exit status 1
:   Relaunching Kubernetes v1.14.1 using kubeadm ...
:   Waiting for pods: apiserver
!   Error restarting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition
*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
    - https://github.com/kubernetes/minikube/issues/new

One way is to configure a proxy server and specify the proxy address through --docker-env (which I have not tried), executing a command similar to the following:

[root@k8s-1 ~]# minikube start --vm-driver=none --docker-env HTTP_PROXY=http://192.168.1.102:1080 --docker-env HTTPS_PROXY=https://192.168.1.102:1080

Another way is to pull the images from a mirror registry and re-tag them.

View all dependent images with the following command

[root@k8s-1 ~]# kubeadm config images list
I0515 18:43:31.317493    7874 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: unexpected EOF
I0515 18:43:31.317592    7874 version.go:97] falling back to the local client version: v1.14.1
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

As shown, the images all come from k8s.gcr.io. If that registry is not accessible, the images can be downloaded from Alibaba Cloud or from the official mirrorgooglecontainers/ repositories on Docker Hub, which are the official mirrors of the Google images.

Finally, download the images and change their tags with commands like the following:

[root@k8s-1 ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
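
For reference, here is a minimal shell sketch that pulls every image from the kubeadm config images list output above out of Docker Hub mirrors and re-tags it for k8s.gcr.io; it assumes the mirrorgooglecontainers/ and coredns/ repositories carry the same tags, so adjust names and versions to your own list:

#!/bin/bash
# Sketch: pull the k8s.gcr.io images from Docker Hub mirrors and re-tag them locally.
# Assumes the mirror repositories carry the same tags as the list above.
images=(
  kube-apiserver:v1.14.1
  kube-controller-manager:v1.14.1
  kube-scheduler:v1.14.1
  kube-proxy:v1.14.1
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull docker.io/mirrorgooglecontainers/"$img"
  docker tag docker.io/mirrorgooglecontainers/"$img" k8s.gcr.io/"$img"
done
# coredns is published under coredns/ on Docker Hub rather than mirrorgooglecontainers/
docker pull docker.io/coredns/coredns:1.3.1
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1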

Execute again

[root@k8s-1 ~]# minikube start --vm-driver=none

If the same kind of error still appears, run delete first and then start again:

[root@k8s-1 ~]# minikube delete
[root@k8s-1 ~]# minikube start --vm-driver=none

After success, the following will appear:

Verifying component health .....
>   Configuring local host environment ...
!   The 'none' driver provides limited isolation and may reduce system security and reliability.
!   For more information, see:
    - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
!   kubectl and minikube configuration will be stored in /root
!   To use kubectl or minikube commands as your own user, you may
!   need to relocate them. For example, to overwrite your own settings:
    - sudo mv /root/.kube /root/.minikube $HOME
    - sudo chown -R $USER $HOME/.kube $HOME/.minikube
i   This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
+   kubectl is now configured to use "minikube"
=   Done! Thank you for using minikube!

If kubectl is installed, you can also execute the following command to view the cluster information

[root@k8s-1 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.110.145:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/client.crt
    client-key: /root/.minikube/client.key

[root@k8s-1 ~]# kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube

[root@k8s-1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.110.145:8443
KubeDNS is running at https://192.168.110.145:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-1 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   14m   v1.14.1

Problems can be investigated by viewing the logs:

minikube logs

When I viewed the log, it reported that pulling k8s.gcr.io/kube-addon-manager:v9.0 had failed.

1.1.3 running a docker image on kubernetes

This section shows how to run a Docker image by running a small Node.js program. It is an example from the official tutorial.

(1) Programming

Save this code in a folder named hellonode, with the filename server.js:

server.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);

(2) create Docker container image

Create a file named Dockerfile in the hellonode folder, as follows

Dockerfile

FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js

Create docker image

docker build -t hello-node:v1 .

You can view the created images through docker images
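
For example (the image ID and creation time will differ on your machine):

docker images hello-node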

(3) Create Deployment

A Kubernetes Pod is a group of one or more containers; the Pod in this tutorial has only one container. A Kubernetes Deployment checks the health of the Pod and restarts its container if it terminates. The Deployment manages the creation and scaling of Pods.

Use the kubectl run command to create a Deployment that manages the Pod. The Pod runs a container based on the hello-node:v1 Docker image:

kubectl run hello-node --image=hello-node:v1 --port=8080

To view the Deployment:

kubectl get deployments

To view Pod:

kubectl get pods

(4) Create Service

By default, this Pod can only be accessed through the internal IP of the Kubernetes cluster. To make the hello-node container accessible from outside the Kubernetes virtual network, the Pod must be exposed as a Kubernetes Service.

We can use the kubectl expose command to expose the Pod to the external environment:

kubectl expose deployment hello-node --type=LoadBalancer

To view the Service you just created:

[root@k8s-1 ~]# kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.102.14.136   <pending>     8080:32075/TCP   11s
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          28h

You can see that port 8080 is mapped to port 32075 on the host, and the content can be viewed by opening it in a browser, or with the quick check below.
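
As a quick check, assuming the node IP shown earlier (192.168.110.145) and the NodePort from the output above (32075; this port differs per cluster), the service can be tested with curl:

curl http://192.168.110.145:32075
# should print: Hello World!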

(5) Delete

kubectl delete service hello-node
kubectl delete deployment hello-node

2 cluster installation

2.1 installation of kubeadm

2.1.1 installation preparation (official website introduction)

(1) Make sure Unique hostname, MAC address, and product_uuid for every node

Check whether the hostname, MAC address, and product_uuid are unique with the following commands:

[root@k8s-2 ~]# hostname
[root@k8s-2 ~]# ifconfig -a
[root@k8s-2 ~]# sudo cat /sys/class/dmi/id/product_uuid

(2) Turn off firewall

Kubernetes needs to bind many ports, so those ports would have to be opened. Here the firewall is simply stopped and disabled at boot.

[root@k8s-2 ~]# systemctl stop firewalld.service
[root@k8s-2 ~]# systemctl disable firewalld.service

(3) Turn off selinux

Temporarily Closed

setenforce 0

Permanent closure

vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled; a reboot is required for the change to take effect.

The shutdown command given on the official website is as follows:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

(4) Disable Swap

Since Kubernetes 1.8, the system swap must be turned off.

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
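
A quick way to verify that swap is really off (the Swap line should show 0 and swapon should print nothing):

free -m | grep -i swap   # Swap totals should be 0
swapon -s                # no output means no active swap devices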

(5) Set time zone and time synchronization

Set time zone

[root@k8s-1 ~]# timedatectl set-timezone Asia/Shanghai

time synchronization

Time is synchronized with chrony.

Install it:

[root@k8s-3 ~]# yum -y install chrony

Configure it:

[root@k8s-3 ~]# vi /etc/chrony.conf

Change the server entries to point to your own NTP servers.

Start chrony and enable it at boot:

[root@k8s-1 ~]# systemctl start chronyd
[root@k8s-1 ~]# systemctl enable chronyd

(6) sysctl configuration

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

(7) Ensure the br_netfilter module is loaded

Check whether it is loaded with the following command:

lsmod | grep br_netfilter

If it is not loaded, load it with the following command

modprobe br_netfilter

2.1.2 installation of kubeadm and kubelet

Next, install kubeadm and kubelet on every node. kubectl, the Kubernetes command-line client, only needs to be installed on the node from which the cluster is administered.

Add yum source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enable kubelet and start it:

systemctl enable --now kubelet

2.1.3 kubeadm command

See the official website for details.

(1) init

The init command initializes the master node. It mainly performs the following phases:

preflight                     Run pre-flight checks
kubelet-start                 Writes kubelet settings and (re)starts the kubelet
certs                         Certificate generation
  /ca                         Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                  Generates the certificate for serving the Kubernetes API
  /apiserver-kubelet-client   Generates the Client certificate for the API server to connect to kubelet
  /front-proxy-ca             Generates the self-signed CA to provision identities for front proxy
  /front-proxy-client         Generates the client for the front proxy
  /etcd-ca                    Generates the self-signed CA to provision identities for etcd
  /etcd-server                Generates the certificate for serving etcd
  /apiserver-etcd-client      Generates the client apiserver uses to access etcd
  /etcd-peer                  Generates the credentials for etcd nodes to communicate with each other
  /etcd-healthcheck-client    Generates the client certificate for liveness probes to healtcheck etcd
  /sa                         Generates a private key for signing service account tokens along with its public key
kubeconfig                    Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                      Generates a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                    Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager         Generates a kubeconfig file for the controller manager to use
  /scheduler                  Generates a kubeconfig file for the scheduler to use
control-plane                 Generates all static Pod manifest files necessary to establish the control plane
  /apiserver                  Generates the kube-apiserver static Pod manifest
  /controller-manager         Generates the kube-controller-manager static Pod manifest
  /scheduler                  Generates the kube-scheduler static Pod manifest
etcd                          Generates static Pod manifest file for local etcd.
  /local                      Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config                 Uploads the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                    Uploads the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                    Uploads the kubelet component config to a ConfigMap
upload-certs                  Upload certificates to kubeadm-certs
mark-control-plane            Mark a node as a control-plane
bootstrap-token               Generates bootstrap tokens used to join a node to a cluster
addon                         Installs required addons for passing Conformance tests
  /coredns                    Installs the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                 Installs the kube-proxy addon to a Kubernetes cluster

(2) config

(3) join

Command format

kubeadm join [api-server-endpoint] [flags]

Task execution phase

preflight               Run join pre-flight checks
control-plane-prepare   Prepares the machine for serving a control plane.
  /download-certs       [EXPERIMENTAL] Downloads certificates shared among control-plane nodes from the kubeadm-certs Secret
  /certs                Generates the certificates for the new control plane components
  /kubeconfig           Generates the kubeconfig for the new control plane components
  /control-plane        Generates the manifests for the new control plane components
kubelet-start           Writes kubelet settings, certificates and (re)starts the kubelet
control-plane-join      Joins a machine as a control plane instance
  /etcd                 Add a new local etcd member
  /update-status        Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap
  /mark-control-plane   Mark a node as a control-plane

2.2 installation of a single-master kubernetes cluster

2.2.1 verify whether the required images can be downloaded

[root@k8s-2 ~]# kubeadm config images pull

For users in China, Google cannot be reached and the following problem appears:

I0521 14:48:48.008225   30022 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 14:48:48.008392   30022 version.go:97] falling back to the local client version: v1.14.2

Solution 1:

[root@k8s-2 ~]# kubeadm config images list
I0521 14:49:20.089689   30065 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 14:49:20.089816   30065 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

For each image in the output, pull it from Docker Hub (mirrorgooglecontainers/) or Alibaba Cloud (registry.aliyuncs.com/google_containers/) and then re-tag it.

This article adopts this approach, as sketched below.
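
A minimal sketch of that approach, assuming the Alibaba Cloud mirror (registry.aliyuncs.com/google_containers) publishes the same image names and tags as the kubeadm config images list output above:

#!/bin/bash
# Sketch: pull each required image from the Alibaba Cloud mirror and re-tag it as k8s.gcr.io.
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 kube-scheduler:v1.14.2 \
           kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull "$MIRROR/$img"
  docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
done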

Solution 2: see Using kubeadm init with a configuration file on the official website.

With a kubeadm configuration file, you can deploy quickly on an intranet by specifying the Docker registry address in the configuration file.

[root@k8s-1 ~]# kubeadm config print init-defaults

Many configurations can be queried, such as kubernetesVersion, imageRepository and other parameters.

Export the defaults with the command

[root@k8s-1 ~]# kubeadm config print init-defaults > kubeadm.conf

and then modify the imageRepository value in kubeadm.conf to an address on Docker Hub or Alibaba Cloud.

After that, it is initialized by configuration parameters

kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf

Note: in general, when running the init command we may also need to specify parameters such as --apiserver-advertise-address and --pod-network-cidr. Because we initialize from the kubeadm.conf configuration file, other parameters cannot be given on the command line and must be set in kubeadm.conf. For example, the advertiseAddress field in kubeadm.conf corresponds to the --apiserver-advertise-address parameter, and podSubnet corresponds to --pod-network-cidr. A sketch of the relevant fields follows.
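
As an illustrative, untested sketch of that note, the relevant fields in the kubeadm.conf printed by kubeadm config print init-defaults (v1beta1 API) look roughly like this; the values are examples taken from this article, not the printed defaults:

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.110.145        # corresponds to --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16                 # corresponds to --pod-network-cidr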

I haven't tried this way.

2.2.2 initializing the master node

Execute initialization command

[root@k8s-1 ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.14.1
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 0.030359 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3n83pz.3uw3bl7w69ddff5d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.145:6443 --token 3n83pz.3uw3bl7w69ddff5d \
    --discovery-token-ca-cert-hash sha256:42128c8f226d03a0c72596a242c595f824b50db5de2eb3197bd383d0dddbc06d

Add kubectl connection configuration

For all users

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For the root user (this does not survive a restart; the method above is more reliable):

export KUBECONFIG=/etc/kubernetes/admin.conf

2.2.3 add pod network

A network plug-in must be installed so that Pods can communicate with each other.

Your Pod network must not overlap with any host network, as this may cause problems. If you find a conflict between the network plug-in's preferred Pod network and some host network, use an appropriate replacement CIDR for --pod-network-cidr during kubeadm init, or replace it in the network plug-in's YAML.

This article chooses the Calico network.

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Then check that CoreDNS and the other pods are in the Running state with the following command:

[root@k8s-1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   calico-node-5v6hj               2/2     Running   0          96s
kube-system   coredns-fb8b8dccf-v4v78         1/1     Running   0          115m
kube-system   coredns-fb8b8dccf-x5dg7         1/1     Running   0          115m
kube-system   etcd-k8s-1                      1/1     Running   0          120m
kube-system   kube-apiserver-k8s-1            1/1     Running   0          120m
kube-system   kube-controller-manager-k8s-1   1/1     Running   0          120m
kube-system   kube-proxy-65fs9                1/1     Running   0          115m
kube-system   kube-scheduler-k8s-1            1/1     Running   0          120m

Modify the Pod network address:

The value passed to kubeadm init --pod-network-cidr was 192.168.0.0/16, which conflicts with the host IP addresses (the hosts are on the 192.168.110.x segment), so the pod network CIDR needs to be changed.

First download calico.yaml and change its network address to 192.168.1.0/24 (sketched below), then apply it:
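
A sketch of the download-and-edit step, assuming (as in the Calico v3.3 manifest referenced above) that the pool is set through the value 192.168.0.0/16:

curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's|192.168.0.0/16|192.168.1.0/24|' calico.yaml   # change the pod pool CIDR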

kubectl apply -f calico.yaml

After that, make the pod-network-cidr value recorded by kubeadm correspond to the value in calico.yaml:

[root@k8s-1 ~]# kubeadm config upload from-flags --pod-network-cidr=192.168.1.0/24

Configuration information can be viewed through the command

[root@k8s-1 ~]# kubeadm config view

Note: this change did not seem to work. After the change, when installing the dashboard, its pod could only run on the master node; when it ran on a worker node it could not find the apiserver address, and the 192.168.1.0/24 address did not work. Re-running kubeadm init and then creating the dashboard container was not successful either. The address finally used was 10.244.0.0/16.

2.2.4 master node isolation control

By default, for security reasons, the cluster does not schedule pods on the master node. If you want to be able to schedule pods on the master, for example for a single-machine Kubernetes cluster used for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

Output content, almost like the following

node "test-01" untainted taint "node-role.kubernetes.io/master:" not found taint "node-role.kubernetes.io/master:" not found

2.2.5 add working nodes

The worker nodes are where containers run. Run the following command on each node to join it to the cluster.

Use the join command printed by kubeadm init.

As shown in the output above, execute the following command on each node as root:

kubeadm join 192.168.110.145:6443 --token 3n83pz.3uw3bl7w69ddff5d \
    --discovery-token-ca-cert-hash sha256:42128c8f226d03a0c72596a242c595f824b50db5de2eb3197bd383d0dddbc06d

If you do not know the token, you can run the following on the master node:

kubeadm token create

If you do not know the discovery-token-ca-cert-hash, execute the following on the master node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
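
Alternatively, if your kubeadm version supports it, a complete join command (token plus hash) can be printed in one step on the master node:

kubeadm token create --print-join-command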

View the added nodes with the following command on the master node:

[root@k8s-1 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
k8s-1   Ready      master   145m    v1.14.2
k8s-2   NotReady   <none>   9m23s   v1.14.2
k8s-3   NotReady   <none>   6m42s   v1.14.2
k8s-4   NotReady   <none>   6m47s   v1.14.2

Status is NotReady

View the kubelet logs on the worker node:

[root@k8s-4 ~]# journalctl -f -u kubelet
-- Logs begin at Sun 2019-05-05 15:27:19 CST. --
May 21 17:17:36 k8s-4 kubelet[7247]: E0521 17:17:36.776015    7247 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

You can see that this is again an image pull problem. Install the pause:3.1 and kube-proxy images on all worker nodes using the re-tag approach, as sketched below.
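
A minimal sketch for the worker nodes, again assuming the Alibaba Cloud mirror carries the same tags:

# Run on each worker node: fetch the two images the kubelet needs and re-tag them.
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.14.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2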

2.2.6 log viewing method

There are multiple ways to view logs, which is important:

tail -f /var/log/messages

/var/log/messages contains all log information of the node. When init or join fails, check it to find the error.

journalctl --unit=kubelet -n 100 --no-pager

Outputs the last 100 lines of messages for a service.

journalctl -f -u kubelet

Follows the messages of a service; pay attention to the timestamps.

kubectl describe pods coredns-123344 -n kube-system

Outputs the details of a pod; kube-system here is the namespace.

kubectl logs coredns-123344 -n kube-system

Once it is determined that a pod has a problem, this outputs its error messages.

2.2.7 undoing the installation (translated from the official website)

To undo what kubeadm did, first drain the node and make sure it is empty before shutting it down.

Log in to the master node and run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node to be removed, reset all kubeadm installation states:

kubeadm reset

The reset process does not reset or clear iptables rules or IPVS tables. If you want to reset iptables, you must do so manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS table, you must run the following command:

ipvsadm -C

2.2.8 deploy dashboard

The dashboard is a web interface for Kubernetes. Through it you can deploy container applications to the cluster, troubleshoot them, and manage cluster resources.

(1) Install dashboard

Following the official website steps exactly does not lead to a successful installation; some changes are needed.

First, the dashboard pod must run on the master node. Initially, when it ran on a worker node, it reported that the apiserver could not be reached, with the error below; specifying the address with --apiserver-host did not help either.

Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://192.168.10.144:6443/version: dial tcp 192.168.110.145:6443: i/o timeout

First download the kubernetes-dashboard.yaml file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Then modify the contents of kubernetes-dashboard.yaml as follows:

Change the template section under the Dashboard Deployment to the following: add nodeName so the pod is scheduled on the master node, and change the image to the Alibaba Cloud mirror of the Google image.

  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s-1
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:

Modify the spec under the Dashboard Service section as follows: set the type to NodePort and the nodePort to 30001, so that the dashboard can be reached through an external IP address; otherwise it can only be accessed from machines inside the Kubernetes cluster.

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001

Execute command after modification

[root@k8s-1 ~]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Then execute the command to check whether the startup is successful

[root@k8s-1 ~]# kubectl -n kube-system get pods
[root@k8s-1 ~]# kubectl -n kube-system get deployments

(2) Create user

Install it following the official website step by step.

Create a file dashboard-adminuser.yaml and enter the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Note that the file above only creates a ServiceAccount and a ClusterRoleBinding; no ClusterRole is defined because clusters created with kops or kubeadm already contain the cluster-admin ClusterRole, which can be used directly.

Then execute the following command;

kubectl apply -f dashboard-adminuser.yaml

Get token Mode 1:

Obtain the token for login through the following command

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Copy the output token to the web and log in.

Get token Mode 2:

Execute command:

[root@k8s-1 ~]# kubectl -n kube-system get secret

Run describe on the corresponding secret name to obtain the token. Taking the secret name admin-user-token-2wrxj as an example:

[root@k8s-1 ~]# kubectl -n kube-system describe secret admin-user-token-2wrxj

Copy the output token to the web and log in.

2.3 high availability kubernetes cluster installation

To be added

3 Cluster Upgrade

3.1 query the upgradeable version by the following command

[root@k8s-1 ~]# kubeadm upgrade plan
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     4 x v1.15.3   v1.15.12

Upgrade to the latest version in the v1.15 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.3   v1.15.12
Controller Manager   v1.15.3   v1.15.12
Scheduler            v1.15.3   v1.15.12
Kube Proxy           v1.15.3   v1.15.12
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.15.12

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.12.

As the output shows, kubeadm itself must be upgraded before Kubernetes can be upgraded.

3.2 upgrade kubeadm

Install kubeadm, kubelet, and kubectl on the master node; install kubeadm and kubelet on the other nodes.

[root@k8s-1 ~]# yum install -y kubeadm-1.15.12-0 kubelet-1.15.12-0 kubectl-1.15.12-0

3.3 after kubeadm is upgraded, execute the following commands to customize the configuration, mainly to change the image repository

[root@k8s-1 ~]# kubeadm config print init-defaults > kubeadm-cof.yaml
[root@k8s-1 ~]# vi kubeadm-cof.yaml

Change the imageRepository value to the Alibaba Cloud repository: registry.aliyuncs.com/google_containers

3.4 execute upgrade command

Execute on the master node:

[root@k8s-1 ~]# kubeadm upgrade apply v1.15.12
... (last few lines of the output)
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Execute on the other nodes:

kubeadm upgrade node

3.5 all nodes execute the following commands

[root@k8s-2 ~]# systemctl daemon-reload
[root@k8s-2 ~]# systemctl restart kubelet

3.6 view the latest version information

[root@k8s-1 ~]# kubectl get nodes
[root@k8s-1 ~]# kubectl version
