Description
KubeSphere®️ is a distributed, multi-tenant, multi-cluster, enterprise-grade open-source container platform built on Kubernetes. It has strong, well-rounded networking and storage capabilities and, with minimal human-machine interaction, provides comprehensive multi-cluster management, CI/CD, microservice governance, application management, and other functions. It helps enterprises quickly build, deploy, and operate a container architecture on heterogeneous infrastructure such as cloud, virtualized, and physical machines, enabling agile development and full lifecycle management of applications.
Official website: https://kubesphere.qingcloud.com/
Kubernetes installation tutorial: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/
Environment preparation
Prepare three CentOS servers and build a master-worker cluster. The configuration is as follows:
| Server | Node | Host IP (public : private) | Configuration |
|---|---|---|---|
| Alibaba Cloud | master | 8.134.124.163 : 172.24.80.197 | 4-core 8G |
| Alibaba Cloud | node1 | 8.134.120.17 : 172.24.80.1 | 8-core 16G |
| Alibaba Cloud | node2 | 8.134.114.33 : 172.24.80.196 | 8-core 16G |
For building and testing, pay-as-you-go instances from a cloud vendor work fine.
Install Docker
```bash
sudo yum remove docker*
sudo yum install -y yum-utils

# Configure the Docker yum repository
sudo yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install the specified versions
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Start Docker and enable it on boot
systemctl enable docker --now

# Configure registry mirror acceleration
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Install Docker on all three servers. After the installation, use the docker commands below to check whether it succeeded:
```bash
docker ps
docker images
docker info
```
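In particular, `docker info` should report the systemd cgroup driver configured in daemon.json above. A quick hedged check:

```bash
# Should print "Cgroup Driver: systemd" if daemon.json took effect
docker info | grep -i "cgroup driver"
```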
Install Kubernetes
Basic environment
First, ensure that the private IP addresses of the three machines can ping each other:
```bash
ip a          # view the private IP address of the current server
ping <ip>     # ping the other machines' private IPs from each node
```
Configure the hostname of each machine:
```bash
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1
hostnamectl set-hostname k8s-node2    # on node2
```
```bash
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
Run the above commands on all three machines.
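To confirm the settings took effect on each machine, a few hedged spot checks (expected values are in the comments):

```bash
getenforce        # should print Permissive
free -m           # the Swap line should show 0
sysctl net.bridge.bridge-nf-call-iptables   # should print "... = 1"
```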
Install kubelet, kubeadm, kubectl
```bash
# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Enable and start kubelet
sudo systemctl enable --now kubelet

# Map the master hostname on all machines
echo "172.24.80.197 k8s-master" >> /etc/hosts
```
Run the installation commands on all three servers. Note: in `echo "172.24.80.197 k8s-master" >> /etc/hosts`, replace 172.24.80.197 with the private IP of your own master host.
Initialize the master node
Execute the initialization command on the master machine:
```bash
kubeadm init \
  --apiserver-advertise-address=172.24.80.197 \
  --control-plane-endpoint=k8s-master \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```
Change the `--apiserver-advertise-address` IP to your master host's private IP. `--control-plane-endpoint` is the master hostname; if you changed it in the previous step, change it here as well.
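If `kubeadm init` fails partway (for example, because of a mistyped flag or IP), you can wipe the partial state and rerun it; `kubeadm reset` is the standard recovery command, shown here as a sketch:

```bash
# Reset the node to its pre-init state, then rerun kubeadm init
kubeadm reset -f
```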
Record key information
If the initialization succeeds, save the information printed to the console:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token kwfdwx.m1bxp5q5o5qctcmh \
    --discovery-token-ca-cert-hash sha256:bf1d6cc1592e4b71e59d3a242513386a1d58683f67bc8adcceb61f2013ee144b \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token kwfdwx.m1bxp5q5o5qctcmh \
    --discovery-token-ca-cert-hash sha256:bf1d6cc1592e4b71e59d3a242513386a1d58683f67bc8adcceb61f2013ee144b
```
Run the commands from the key information on the master:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Install the Calico network plugin
```bash
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```
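Calico usually needs a minute or two to pull images and start. A hedged way to watch its progress (press Ctrl+C to stop watching):

```bash
# Wait until the calico-node and calico-kube-controllers Pods are Running
kubectl get pods -n kube-system -w
```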
Join worker nodes
Now run the last block of the key information on each worker node, so the nodes join and communicate with the master:
```bash
kubeadm join k8s-master:6443 --token kwfdwx.m1bxp5q5o5qctcmh \
  --discovery-token-ca-cert-hash sha256:bf1d6cc1592e4b71e59d3a242513386a1d58683f67bc8adcceb61f2013ee144b
```
On the master node, check the cluster nodes:
```bash
kubectl get nodes
```
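Freshly joined nodes may report NotReady until Calico has networked them. A hedged sketch of what a healthy cluster looks like (AGE values and exact columns will differ):

```bash
# Re-run (or watch) until every node reports Ready
watch kubectl get nodes
# NAME         STATUS   ROLES                  AGE   VERSION
# k8s-master   Ready    control-plane,master   10m   v1.20.9
# k8s-node1    Ready    <none>                 5m    v1.20.9
# k8s-node2    Ready    <none>                 5m    v1.20.9
```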
When all nodes show Ready, the k8s cluster has been built successfully.
Install the KubeSphere front-end environment
Install the NFS server
```bash
yum install -y nfs-utils
```
Install this package on each of the three machines.
```bash
# Execute on the master: expose the shared directory
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory
mkdir -p /nfs/data

# Execute on the master: start the NFS services
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Make the configuration take effect
exportfs -r

# Check whether the configuration is in effect
exportfs
```
Run these commands on the master; they set up the master as the NFS server.
Configure NFS client
```bash
showmount -e 172.24.80.197
mkdir -p /nfs/data
mount -t nfs 172.24.80.197:/nfs/data /nfs/data
```
Run these commands on each worker node. Note that the IP is the master node's private IP.
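To make sure the share really works end to end, a hedged test: write a file from a node and read it back on the master (the filename `test.txt` is arbitrary):

```bash
# On a node: the mount should be listed, and writes should succeed
df -h | grep /nfs/data
echo "hello nfs" > /nfs/data/test.txt

# On the master: the file written from the node should appear
cat /nfs/data/test.txt
```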
Configure default storage
Create an sc.yaml file on the master node:
```yaml
## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## Whether to back up the contents of a PV when it is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.24.80.197  ## Specify your own NFS server address
            - name: NFS_PATH
              value: /nfs/data      ## Directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.24.80.197
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
When copying the manifest above, note that it contains two IP settings (the NFS_SERVER env variable and the nfs volume's server); set both to your own master server's IP address, then save.
```bash
kubectl apply -f sc.yaml
```
After it applies successfully, check the StorageClass:
```bash
kubectl get sc
```
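If the StorageClass was created, the output should look roughly like the sketch below; this is an illustrative sample (the column layout varies with the kubectl version), not verbatim output:

```bash
# NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE
# nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate
```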
Test dynamic PV provisioning by creating a PVC:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
```
Create a pvc.yaml file with the content above, then apply it:
```bash
kubectl apply -f pvc.yaml
```
Check the result:
```bash
kubectl get pv
kubectl get pvc
```
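If dynamic provisioning works, the PVC is Bound to an automatically created PV. A hedged sample (the PV name is generated, so yours will differ):

```bash
# kubectl get pvc
# NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
# nginx-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   200Mi      RWX            nfs-storage
```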
metrics-server
metrics-server collects cluster metrics (CPU and memory usage of nodes and Pods). Create it from the manifest below:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --kubelet-insecure-tls
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 4443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
```
Create a metrics.yaml file with the manifest above, then apply it:
```bash
kubectl apply -f metrics.yaml
```
After it applies successfully, check that the Pod is running:
```bash
kubectl get pod -A
```
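Once the metrics-server Pod is Running, resource metrics become available after a short warm-up. A quick hedged check:

```bash
# Should print per-node and per-pod CPU/memory usage
kubectl top nodes
kubectl top pods -A
```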
Install KubeSphere
Download core files
```bash
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
```
If your connection to GitHub is slow, download these two files by other means and upload them to the server.
Modify cluster configuration
I have already made the edits; the modified file is below. You can create cluster-configuration.yaml directly and copy it in.
```yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 172.24.80.197  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi   # Minio PVC size.
    openldapVolumeSize: 2Gi # openldap PVC size.
    redisVolumSize: 2Gi     # Redis PVC size.
    monitoring:
      # type: external      # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090  # Prometheus endpoint to get metrics data.
    es:                     # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1  # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1    # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7          # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash   # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                 # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true           # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                 # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true           # Enable or disable the KubeSphere Auditing Log System.
  devops:                   # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true           # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi   # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi # Jenkins memory request.
    jenkinsVolumeSize: 8Gi  # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                   # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true           # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:                  # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true           # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:           # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false          # Enable or disable metrics-server.
  monitoring:
    storageClass: ""        # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi  # Prometheus request memory.
    prometheusVolumeSize: 20Gi      # Prometheus PVC size.
    # alertmanagerReplicas: 1       # AlertManager Replicas.
  multicluster:
    clusterRole: none       # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy:          # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). Make sure that the CNI network plugin used by the cluster supports NetworkPolicy, e.g. Calico, Cilium, Kube-router, Romana or Weave Net.
      enabled: true         # Enable or disable network policies.
    ippool:                 # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico          # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology:               # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none            # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix:               # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true         # Enable or disable the KubeSphere App Store.
  servicemesh:              # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true           # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:                 # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true           # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:   # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
          - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
```
Note: change endpointIps to your own master node's IP.
Then apply both files on the master server:
```bash
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
View installation progress
Installation takes a while; please wait patiently and use the command below to follow the progress. When it succeeds, the log prints the console address along with the default username and password. Remember to open port 30880 in your cloud security group.
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
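Before logging in, you can also confirm that all component Pods have come up; some take several minutes to reach Running:

```bash
# Everything should eventually be Running or Completed
kubectl get pods --all-namespaces
```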
Access port 30880 of any machine and log in with:
Account: admin
Password: P@88w0rd
Fix the missing etcd monitoring certificate
When viewing the Pods, you will find that one of the monitoring Prometheus Pods cannot start because the etcd certificate Secret is missing. Just run the following command on the master:
```bash
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
```
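To confirm the Secret exists and the monitoring Pods recover, a quick hedged check:

```bash
# The new secret should be listed
kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs

# Watch the prometheus Pods return to Running
kubectl get pods -n kubesphere-monitoring-system
```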
After that, log in again; the installation is complete.
If this article helped you, please give it a like, favorite, and follow. Thank you!