The newer versions of kubeadm have changed some parameters and the installation procedure has changed with them; refer to the notes on the new version where they appear below.
```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```
The repo above provides the kubeadm and kubectl commands and the kubelet service. On the master, the kubelet service is what starts the static pods, and all of the master's components run as static pods.
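You can see this for yourself: kubeadm writes the control-plane manifests to a directory that the kubelet watches (the paths below are the kubeadm defaults; adjust if yours differ):

```
# the control-plane manifests kubeadm writes, run by the kubelet as static pods
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# the kubelet picks that directory up from its config file
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests
```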
- kubeadm init starts a Kubernetes master node; add --image-repository to pull from your own repository (see kubeadm init --help)
- kubeadm join starts a Kubernetes worker node and joins it to the cluster
- kubeadm upgrade updates a Kubernetes cluster to a new version
- kubeadm config: if you initialized the cluster with kubeadm v1.7.x or lower, you need to run this to reconfigure the cluster before kubeadm upgrade can be used
- kubeadm token manages the tokens used by kubeadm join
- kubeadm reset reverts any changes made to the host by kubeadm init or kubeadm join
- kubeadm version prints the kubeadm version
- kubeadm alpha previews a set of new features made available for gathering feedback from the community
```
kubeadm init [flags]
```
Note that the kubernetesVersion item in the configuration file, or the --kubernetes-version command-line flag, determines which image versions are used.
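For example, either of the following pins the control-plane images to a given release (the version string is just an illustration; use the release you are actually installing):

```
# on the command line
kubeadm init --kubernetes-version v1.13.4

# or in the file passed to --config (kubeadm.conf)
# apiVersion: kubeadm.k8s.io/v1beta1
# kind: ClusterConfiguration
# kubernetesVersion: v1.13.4
```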
Install

Master node

| Port | Purpose | Used by |
|------|---------|---------|
| 6443* | kube-apiserver | all |
| 2379-2380 | etcd | kube-apiserver, etcd |
| 10250 | kubelet | self, control plane |
| 10251 | kube-scheduler | self |
| 10252 | kube-controller-manager | self |

Worker node

| Port | Purpose | Used by |
|------|---------|---------|
| 10250 | kubelet | self, control plane |
| 30000-32767 | NodePort services** | all |

Adjust kernel parameters
vim /etc/sysctl.conf and add:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
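One caveat: these two keys only exist once the br_netfilter kernel module is loaded, so if sysctl later complains about unknown keys, load the module first. A minimal sketch:

```
# load the bridge netfilter module and make it persistent across reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# verify both settings report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```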
Disable the firewall and SELinux:

```
systemctl disable firewalld
systemctl stop firewalld
systemctl stop iptables
systemctl disable iptables
setenforce 0
```
Then apply the kernel parameters:

```
sysctl -p
```

After switching the yum source to the repo at the beginning of the article, install the packages:
```
yum -y install docker kubelet kubeadm ebtables ethtool
```
Configure /etc/docker/daemon.json:

```
{
  "insecure-registries": ["http://harbor.test.com"],
  "registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
```
Create /etc/default/kubelet with the following content:

```
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd
```
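This value has to match the cgroup driver docker actually uses, otherwise the kubelet refuses to run pods; once docker is up you can check it like this:

```
# kubelet's --cgroup-driver must match docker's cgroup driver
docker info 2>/dev/null | grep -i 'cgroup driver'
# Cgroup Driver: systemd
```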
Starting kubelet at this point fails with:

```
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
```

This is expected until kubeadm init generates that file.
Start docker first:

```
systemctl daemon-reload
systemctl start docker
```
Then check which images we need:
```
[root@host5 kubernetes]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
```
This step also differs in the newer versions, because flags such as print-defaults no longer exist there.
Because these images are all hosted abroad and cannot be pulled from here, they do not meet our installation needs and we have to switch to a different image repository.
First, we generate a configuration file:

```
kubeadm config print-defaults --api-objects ClusterConfiguration > kubeadm.conf
```
In the configuration file, replace

```
imageRepository: k8s.gcr.io
```

with your own private repository:

```
imageRepository: docker.io/mirrorgooglecontainers
```
Sometimes you also need to change the kubernetesVersion parameter; in my actual installation this was not necessary.
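If you prefer to script these edits instead of opening an editor, something like the following works (a sketch against the kubeadm.conf generated above; the replacement values are the ones used in this article):

```
# point kubeadm at the mirror repository instead of k8s.gcr.io
sed -i 's|imageRepository: k8s.gcr.io|imageRepository: docker.io/mirrorgooglecontainers|' kubeadm.conf

# only if you need to pin a different release:
# sed -i 's|kubernetesVersion: .*|kubernetesVersion: v1.13.4|' kubeadm.conf
```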
Then run the commands:

```
kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf
```
View the current configuration:

```
kubeadm config view
```
The default repository above does not provide the coredns image, so I pull it and retag it locally:
```
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
```
Now let's check:
```
[root@host5 kubernetes]# docker images
REPOSITORY                                                  TAG       IMAGE ID       CREATED         SIZE
docker.io/mirrorgooglecontainers/kube-proxy                 v1.13.0   8fa56d18961f   3 months ago    80.2 MB
docker.io/mirrorgooglecontainers/kube-apiserver             v1.13.0   f1ff9b7e3d6e   3 months ago    181 MB
docker.io/mirrorgooglecontainers/kube-controller-manager    v1.13.0   d82530ead066   3 months ago    146 MB
docker.io/mirrorgooglecontainers/kube-scheduler             v1.13.0   9508b7d8008d   3 months ago    79.6 MB
docker.io/coredns/coredns                                   1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/coredns                    1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/etcd                       3.2.24    3cab8e1b9802   5 months ago    220 MB
docker.io/mirrorgooglecontainers/pause                      3.1       da86e6ba6ca1   14 months ago   742 kB
```
Then we run:

```
kubeadm init --config kubeadm.conf
```
```
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  ####### the network also needs to be installed separately
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.11.90.45:6443 --token 2rau0q.1v7r64j0qnbw54ev --discovery-token-ca-cert-hash sha256:eb792e5e9f64eee49e890d8676c0a0561cb58a4b99892d22f57d911f0a3eb7f2
```
You can see that you need to execute:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Otherwise kubectl will talk to the default port 8080 and fail with:

```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
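Alternatively, if you are root, kubeadm's documentation suggests simply pointing KUBECONFIG at the admin config:

```
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```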
Component status now looks healthy:

```
[root@host5 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```
In the newer version 1.14.1, none of the configuration above is required: push the images into your own private image library and run directly:
```
kubeadm init --image-repository harbor.test.com/k8snew
```
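Populating that private library is just a pull/retag/push loop. A sketch, assuming the harbor.test.com/k8snew project from the command above and pulling through the Aliyun mirror that appears later in this article (tags match the 1.14.1 release):

```
SRC=registry.cn-hangzhou.aliyuncs.com/google_containers
DST=harbor.test.com/k8snew

for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 \
           kube-scheduler:v1.14.1 kube-proxy:v1.14.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull $SRC/$img            # pull from the reachable mirror
  docker tag  $SRC/$img $DST/$img  # retag into the private project
  docker push $DST/$img            # push to harbor
done
```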
Then follow the instructions
As you can see below, two pods are Pending because I have not deployed a network component yet; as the init output above noted, the network has to be installed separately.
```
[root@host5 ~]# kubectl get all --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-9f7ddc475-kwmxg         0/1     Pending   0          45m
kube-system   pod/coredns-9f7ddc475-rjs8d         0/1     Pending   0          45m
kube-system   pod/etcd-host5                      1/1     Running   0          44m
kube-system   pod/kube-apiserver-host5            1/1     Running   0          45m
kube-system   pod/kube-controller-manager-host5   1/1     Running   0          44m
kube-system   pod/kube-proxy-nnvsl                1/1     Running   0          45m
kube-system   pod/kube-scheduler-host5            1/1     Running   0          45m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         46m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   45m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          45m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           45m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-9f7ddc475   2         2         0       45m
```
Next we deploy the Calico network component.
As the init output above indicates, the installation of network add-ons is documented at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/
We download this yaml file (or apply it remotely):

```
kubectl apply -f calico.yml
```

In it, the image references point at the private registry:

```
...
  image: harbor.test.com/k8s/cni:v3.6.0
...
  image: harbor.test.com/k8s/node:v3.6.0
```
If you cannot download the foreign images, you can refer to my other article, "Change Default Mirror Library to Specified".
In the current version, the related network component images have moved to v3.7.2.
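Rewriting the image references can also be scripted; a sketch, assuming the upstream manifest references the calico/ images from Docker Hub:

```
# repoint the calico images at the private registry, then check and apply
sed -i 's|image: calico/|image: harbor.test.com/k8s/|' calico.yml
grep 'image:' calico.yml
kubectl apply -f calico.yml
```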
Then we check:

```
kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7c69b4dd88-89jnl   1/1     Running   0          6m58s
kube-system   calico-node-7g7gn                          1/1     Running   0          6m58s
kube-system   coredns-f7855ccdd-p8g58                    1/1     Running   0          86m
kube-system   coredns-f7855ccdd-vkblw                    1/1     Running   0          86m
kube-system   etcd-host5                                 1/1     Running   0          85m
kube-system   kube-apiserver-host5                       1/1     Running   0          85m
kube-system   kube-controller-manager-host5              1/1     Running   0          85m
kube-system   kube-proxy-6zbzg                           1/1     Running   0          86m
kube-system   kube-scheduler-host5                       1/1     Running   0          85m
```
All pods have started properly.
On the worker nodes, likewise disable SELinux, adjust the kernel parameters, and so on.
Then execute:

```
kubeadm join 10.11.90.45:6443 --token 05o0eh.6andmi1961xkybcu --discovery-token-ca-cert-hash sha256:6ebbf4aeca912cbcf1c4ec384721f1043714c3cec787c3d48c7845f95091a7b5
```
To join an additional master node, add the --experimental-control-plane parameter.
If you forget the token, use:

```
kubeadm token create --print-join-command
```
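The two pieces can also be recovered separately; the hash command below is the one from the kubeadm documentation:

```
# list existing tokens
kubeadm token list

# recompute the --discovery-token-ca-cert-hash value from the master's CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```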
```
journalctl -xe -u docker        # view docker's logs
journalctl -xe -u kubelet -f    # follow kubelet's logs
```
Dashboard installation
Download the image and the yaml file:

```
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
```
The yaml file is as follows:
```
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: harbor.test.com/k8s/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30003
  selector:
    k8s-app: kubernetes-dashboard
```
As above, the Service type is changed from cluster-internal access to NodePort, mapping the dashboard to port 30003 on the node.
After kubectl create -f, open the dashboard interface by directly accessing https://nodeip:30003.
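If you are unsure which port was assigned, check the Service (names as in the yaml above):

```
kubectl -n kube-system get svc kubernetes-dashboard
# NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
# kubernetes-dashboard   NodePort   10.x.x.x      <none>        443:30003/TCP   1m
```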
Testing showed that Chrome does not work here, but Firefox does.
At this point a token is required to log in, so next we generate one.
admin-token.yml:

```
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```
After kubectl create -f admin-token.yml:

```
kubectl get secret -n kube-system | grep admin | awk '{print $1}'
kubectl describe secret admin-token-8hn52 -n kube-system | grep '^token' | awk '{print $2}'   # replace the secret name here with the actual one
```
Record the generated token; it is what you use to log in.
After restarting the server, turn swap back off, start the kubelet service, and then start all the containers:

```
swapoff -a
systemctl start kubelet
docker start $(docker ps -a | awk '{ print $1}' | tail -n +2)
```
Reference resources
https://github.com/gjmzj/kubeasz
Heapster installation
The images used here are the heapster images installed in another of my articles.
There are four yaml files (omitted)
grafana.yaml only needs the image modified:

```
...
        #image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        image: harbor.test.com/rongruixue/heapster_grafana:latest
...
```
heapster-rbac.yaml does not need to be modified
```
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```
heapster.yaml needs volumes added for mounting and the image modified, as follows:

```
spec:
  serviceAccountName: heapster
  containers:
  - name: heapster
    #image: k8s.gcr.io/heapster-amd64:v1.5.4
    image: harbor.test.com/rongruixue/heapster:latest
    volumeMounts:
    - mountPath: /srv/kubernetes
      name: auth
    - mountPath: /root/.kube
      name: config
    imagePullPolicy: IfNotPresent
    command:
    - /heapster
    - --source=kubernetes:https://kubernetes.default?inClusterConfig=false&insecure=true&auth=/root/.kube/config
    - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
  volumes:
  - name: auth
    hostPath:
      path: /srv/kubernetes
  - name: config
    hostPath:
      path: /root/.kube
```
Explanation:

- inClusterConfig=false: do not use the kube config from the service account;
- insecure=true: I took a shortcut here and chose to trust the server certificate sent by kube-apiserver, i.e. not to verify it;
- auth=/root/.kube/config: this is the key! When the service account is not used, heapster authenticates against kube-apiserver using the credentials in this auth file.

Without it, heapster cannot connect to the apiserver on port 6443.
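Note that because /root/.kube is a hostPath volume, the auth file must actually exist on whichever node the heapster pod lands on. A sketch, assuming the master's admin.conf is acceptable as the credentials:

```
# on each node that may run heapster: provide the file the hostPath mount expects
mkdir -p /root/.kube
scp root@<master-ip>:/etc/kubernetes/admin.conf /root/.kube/config   # <master-ip> is your master
```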
Note that after installing the newer v1.14.1, heapster at this version complains that it is not authenticated; the following images are used later instead:

```
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64            v1.5.4
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64   v1.5.2
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64    v5.0.4
```
influxdb.yaml only needs the image modified:

```
spec:
  containers:
  - name: influxdb
    #image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
    image: harbor.test.com/rongruixue/influxdb:latest
```
After running kubectl create -f, the heapster pod reported an error: the kubelet --read-only-port=10255 parameter is missing. Without it, heapster cannot connect to port 10255 on the node when serving the dashboard:

```
E0320 14:07:05.008856 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://10.11.90.45:10255/stats/container/": Post http://10.11.90.45:10255/stats/container/: dial tcp 10.11.90.45:10255: getsockopt: connection refused
```
On inspection, we found that none of the kubelets had opened port 10255. The relevant kubelet ports are described as follows:
| Parameter | Explanation | Default |
|-----------|-------------|---------|
| --address | Address the kubelet service listens on | 0.0.0.0 |
| --port | Port the kubelet service listens on | 10250 |
| --healthz-port | Port for the health-check service | 10248 |
| --read-only-port | Read-only port, directly accessible without authentication or authorization | 10255 |

So we want to open this read-only port.
When we start the kubelet service on each machine, the port has to be enabled in the kubelet configuration file:
```
vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--read-only-port=10255"
```
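Then restart kubelet and confirm the read-only port answers; /pods is served on 10255 without credentials:

```
systemctl daemon-reload
systemctl restart kubelet

# should return pod JSON without any authentication
curl -s http://localhost:10255/pods | head -c 200
```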
With this in place, the installation succeeds:

```
[root@host5 heapster]# kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
host4   60m          6%     536Mi           28%
host5   181m         9%     1200Mi          64%
```