Kubernetes Cluster: Building a Web Management Interface


1. View K8S cluster status before deployment

[root@master1 ~]# kubectl get nodes
NAME              STATUS     ROLES    AGE     VERSION
192.168.191.131   NotReady   <none>   7d22h   v1.12.3
192.168.191.132   Ready      <none>   7d21h   v1.12.3
[root@master1 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-sx4m6   1/1     Running   0          5d14h
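Before deploying add-ons it is worth confirming that every node is Ready. A small helper that reads `kubectl get nodes` output on stdin and prints only the nodes that are not Ready; this is a sketch assuming the default column layout (NAME STATUS ROLES AGE VERSION):

```shell
# print the names of nodes whose STATUS column is not "Ready";
# skips the header row, assumes kubectl's default output columns
not_ready_nodes() { awk 'NR > 1 && $2 != "Ready" {print $1}'; }

# usage: kubectl get nodes | not_ready_nodes
```

Against the output above, this would print only 192.168.191.131.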

2. Deploy the dashboard UI on the master node
1. Create the dashboard working directory

[root@master1 ~]# mkdir /k8s/dashboard

2. Download the official manifests locally from:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard

The five important manifests, in detail:

  • dashboard-configmap.yaml configuration (ConfigMap)
  • dashboard-deployment.yaml controller
  • dashboard-rbac.yaml role-based access control
  • dashboard-secret.yaml secrets
  • dashboard-service.yaml service
    Create these resources in the following order:
    (1) RBAC roles
    (2) Secrets
    (3) ConfigMap
    (4) Controller
    (5) Service
    Here I use dashboard version 1.8.4. In 1.8.4 the controller manifest is named controller.yaml; in version 1.10 it was renamed deployment.yaml. Both define the same thing: the controller.
    [root@master1 ~]# cd /k8s/dashboard/
    [root@master1 dashboard]# ls
    dashboard-configmap.yaml  dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-secret.yaml  dashboard-service.yaml  k8s-admin.yaml
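The creation order described above can be sketched as a small loop. The `kubectl create` calls are only echoed here so the ordering can be shown without a live cluster; on a real master node, drop the `echo`:

```shell
# apply the dashboard manifests in the dependency order given above:
# rbac -> secret -> configmap -> controller -> service
# (echoed rather than executed so it can run without a cluster)
for f in dashboard-rbac.yaml dashboard-secret.yaml dashboard-configmap.yaml \
         dashboard-controller.yaml dashboard-service.yaml; do
  echo kubectl create -f "$f"
done
```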

    3. Create the resources from the YAML files

  • View the current status of each resource
    Namespaces:
    [root@master1 dashboard]# kubectl get ns
    NAME          STATUS   AGE
    default       Active   7d23h
    kube-public   Active   7d23h
    kube-system   Active   7d23h
    [root@master1 dashboard]# kubectl get pod
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-dbddb74b8-sx4m6   1/1     Running   0          5d14h
    [root@master1 dashboard]# kubectl get pod -n kube-system
    No resources found.

    [root@master1 dashboard]# kubectl get all   #This all contains four resources: pod, deployment, service, and replicaset
    NAME                        READY   STATUS    RESTARTS   AGE
    pod/nginx-dbddb74b8-sx4m6   1/1     Running   0          5d14h
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   7d23h
    NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   1         1         1            1           5d14h
    NAME                              DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-dbddb74b8   1         1         1       5d14h

    View the Roles currently in the cluster:

    [root@master1 dashboard]# kubectl get Role -n kube-system
    NAME                                             AGE
    extension-apiserver-authentication-reader        7d23h
    system::leader-locking-kube-controller-manager   7d23h
    system::leader-locking-kube-scheduler            7d23h
    system:controller:bootstrap-signer               7d23h
    system:controller:cloud-provider                 7d23h
    system:controller:token-cleaner                  7d23h
  • Create rbac resource
    [root@master1 dashboard]# kubectl create -f dashboard-rbac.yaml 
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  • View resource status after creation
    [root@master1 dashboard]# kubectl get all
    NAME                        READY   STATUS    RESTARTS   AGE
    pod/nginx-dbddb74b8-sx4m6   1/1     Running   0          5d14h
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   7d23h
    NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   1         1         1            1           5d14h
    NAME                              DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-dbddb74b8   1         1         1       5d14h

    View the roles again. In the manifest, the role's namespace is set to kube-system, so pass -n kube-system when viewing:

    [root@master1 dashboard]# kubectl get role -n kube-system
    NAME                                             AGE
    extension-apiserver-authentication-reader        7d23h
    kubernetes-dashboard-minimal                     3m
    system::leader-locking-kube-controller-manager   7d23h
    system::leader-locking-kube-scheduler            7d23h
    system:controller:bootstrap-signer               7d23h
    system:controller:cloud-provider                 7d23h
    system:controller:token-cleaner                  7d23h
    4. Create the five resources in order

    #Create the RBAC roles
    [root@localhost dashboard]# kubectl create -f dashboard-rbac.yaml 
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    #Create the secrets
    [root@localhost dashboard]# kubectl create -f dashboard-secret.yaml 
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-key-holder created
    #Create the ConfigMap
    [root@localhost dashboard]# kubectl create -f dashboard-configmap.yaml 
    configmap/kubernetes-dashboard-settings created
    #Create the controller
    #This article uses version 1.8.4, so the file is controller.yaml; in version 1.10 it is deployment.yaml. Both are the same thing: the controller.
    [root@localhost dashboard]# kubectl create -f dashboard-controller.yaml 
    serviceaccount/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    #Create the service
    [root@localhost dashboard]# kubectl create -f dashboard-service.yaml 
    service/kubernetes-dashboard created

    5. After creation completes, view the pods in the kube-system namespace

    [root@localhost dashboard]# kubectl get pods -n kube-system
    NAME                                    READY   STATUS              RESTARTS   AGE
    kubernetes-dashboard-65f974f565-m9gm8   0/1     ContainerCreating   0          88s

6. View the access address

[root@localhost dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-m9gm8   1/1     Running   0          2m49s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.243   <none>        443:30001/TCP   2m24s

Now access the dashboard through a node's IP address at the NodePort shown above (https://<node-ip>:30001).
You will find it is not accessible at this point, because the certificate is untrusted (self-signed).
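For reference, a sketch of what dashboard-service.yaml typically contains. The port 443 and nodePort 30001 values match the `kubectl get svc` output above; the targetPort and selector are assumptions based on the stock v1.8.4 manifest:

```shell
# write a sketch of the dashboard NodePort service manifest;
# targetPort 8443 and the selector label are assumed, not taken from this article
cat > dashboard-service-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
EOF
```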

7. Create a certificate

[root@localhost dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

#Run the script to generate the certificate
[root@localhost dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
#Add the two certificate lines under the args section of dashboard-controller.yaml
[root@localhost dashboard]# vim dashboard-controller.yaml
args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem

#Redeploy
[root@localhost dashboard]# kubectl apply -f dashboard-controller.yaml

After the certificate is generated, the dashboard can be accessed normally.

Generate a token

[root@localhost dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-vnm9z        kubernetes.io/service-account-token   3      65s
default-token-zb8bw                kubernetes.io/service-account-token   3      8d
kubernetes-dashboard-certs         Opaque                                11     162s
kubernetes-dashboard-key-holder    Opaque                                2      262s
kubernetes-dashboard-token-ctfp9   kubernetes.io/service-account-token   3      62s
#View the token
[root@localhost dashboard]# kubectl describe secret dashboard-admin-token-vnm9z -n kube-system
Name:         dashboard-admin-token-vnm9z
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: de06f523-905f-11ea-80d3-000c29535012

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
//Copy the token below to log in
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdm5tOXoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGUwNmY1MjMtOTA1Zi0xMWVhLTgwZDMtMDAwYzI5NTM1MDEyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.gfj0Yba5aexCLCDiPp2MzFEesuFUOxqJf0HFofijRm5_MjucfsLVdIgWg4eIS8Vuf8Fz7JX0sqhhDN-j4KgNAfIi7ZwREDC73NExYCTpbcBZSVff9MA0ynmLcAySRUToDNS58My2ZQpPsDokI0-wrOyql-VQcTgKdJ3Qwj6wdZVvBGXJlWzDS4AxSZTdJVGJtrfN9SNr1372wqWY7QLJj3zn-mc6F5eLU-bR9DJ7909qSV7Vh-XSJtzbRpbxQk9AGo5r1Rb2I04fchiVLVVE8K362bLtGkjXulmybya_t1naG0_YRlOZDG3GOQcKG0KyvYcFjPWLX89uop7u2Tl5Kg
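Copying the long token by hand from the `kubectl describe secret` output is error-prone. A small helper that reads that output on stdin and prints only the token value; this assumes the default describe layout, where the value follows a `token:` label:

```shell
# print only the value of the "token:" line from `kubectl describe secret` output
extract_token() { awk '$1 == "token:" {print $2}'; }

# usage (with the secret name from the listing above):
#   kubectl describe secret dashboard-admin-token-vnm9z -n kube-system | extract_token
```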


At this point, the web management interface for the K8S cluster is set up.


Posted on Sun, 10 May 2020 22:53:38 -0400 by predator12341