Monitoring etcd with kube-prometheus on a Kubernetes Cluster

1, Background

Reprinted, with slight adjustments, from: https://cloud.tencent.com/developer/article/1760581
Besides monitoring the nodes, components, and resource objects of a Kubernetes cluster, we sometimes need to add custom monitoring items for our own workloads. Adding a custom monitoring target takes three simple steps:

The first step: create a ServiceMonitor object to add a new scrape job to Prometheus;
The second step: associate the ServiceMonitor with the Service object that fronts the metrics endpoint;
The third step: make sure the Service can correctly reach the metrics data.

The kube-prometheus deployment configured above already monitors the Kubernetes cluster, but it contains no etcd monitoring. Next, let's show how to add monitoring for an etcd cluster. Whether the etcd cluster runs outside the Kubernetes cluster or was installed inside it by kubeadm, we treat it as an independent cluster external to Kubernetes, because the two cases are handled in exactly the same way.

Note:

etcd is a core component of the K8S cluster: it serves as the cluster's database.

Our etcd is deployed from binaries, i.e. it does not run in Docker or any other container runtime.

2, Configure etcd service

①. View the etcd startup configuration

[root@k8s01 manifests]# cat /etc/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd_new
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd_new/etcd.restore \
  --wal-dir=/data/k8s/etcd_new/wal \
  --name=k8s01 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.16.1.11:2380 \
  --initial-advertise-peer-urls=https://172.16.1.11:2380 \
  --listen-client-urls=https://172.16.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.16.1.11:2379 \
  --listen-metrics-urls=http://172.16.1.11:2381 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=k8s01=https://172.16.1.11:2380,k8s02=https://172.16.1.12:2380,k8s03=https://172.16.1.13:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

We can see the startup parameter --listen-metrics-urls=http://172.16.1.11:2381, which makes etcd serve its metrics endpoint on port 2381 over plain HTTP. No certificate configuration is therefore required, which is much simpler than older versions, where the metrics endpoint was only reachable over HTTPS and the matching certificates had to be configured.

If the flag is not present, add --listen-metrics-urls=http://ip:2381 to the startup file and restart the etcd service.
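Once the flag is in place, it is worth confirming that the endpoint actually serves data. A minimal Python sketch (the node address comes from the unit file above; `metric_value` and `check_member` are illustrative helpers, not part of any etcd tooling):

```python
import urllib.request

def metric_value(exposition: str, name: str):
    """Return the value of the first sample whose name starts with `name`."""
    for line in exposition.splitlines():
        if line.startswith("#"):      # skip HELP/TYPE comment lines
            continue
        if line.startswith(name):
            return float(line.rsplit(" ", 1)[-1])
    return None

def check_member(url: str = "http://172.16.1.11:2381/metrics"):
    """Fetch the metrics page and report leader status (1.0 = healthy)."""
    text = urllib.request.urlopen(url, timeout=5).read().decode()
    return metric_value(text, "etcd_server_has_leader")
```

On a healthy member, `check_member()` should return 1.0; any other result (or a connection error) means the metrics endpoint needs attention before wiring it into Prometheus.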

3, Deploy the ServiceMonitor, Service & Endpoints in k8s

①. Deploy the ServiceMonitor

$ vi prometheus-serviceMonitorEtcd.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd-k8s
  namespace: monitoring
  labels:
    k8s-app: etcd-k8s
spec:
  jobLabel: k8s-app
  endpoints:
  - port: port
    interval: 15s
  selector:
    matchLabels:
      k8s-app: etcd
  namespaceSelector:
    matchNames:
    - kube-system

Above, we created a ServiceMonitor object named etcd-k8s in the monitoring namespace. Its basic properties are the same as described earlier: it selects the Service labeled k8s-app=etcd in the kube-system namespace. jobLabel names the Service label whose value is used as the Prometheus job name. Because the etcd metrics endpoint listens on port 2381 without HTTPS authentication, the default endpoint configuration is sufficient. We can now create this ServiceMonitor object directly:

kubectl apply -f prometheus-serviceMonitorEtcd.yaml
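The selection rule the ServiceMonitor applies can be sketched as a small predicate (an illustration of the matching logic only, not Prometheus Operator code):

```python
# A Service is picked up when its namespace is in
# namespaceSelector.matchNames and its labels contain every
# key/value pair in spec.selector.matchLabels.
def service_is_selected(service_labels, service_namespace,
                        match_labels, match_namespaces):
    return service_namespace in match_namespaces and all(
        service_labels.get(k) == v for k, v in match_labels.items()
    )
```

For our manifests: `service_is_selected({"k8s-app": "etcd"}, "kube-system", {"k8s-app": "etcd"}, ["kube-system"])` is True, which is why the Service created in the next step must carry exactly the k8s-app=etcd label in kube-system.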

②. Create the Service & Endpoints

Because etcd is independent of the cluster, we need to create an Endpoints object that proxies it into the Kubernetes cluster, then create a Service bound to that Endpoints object. Applications inside the Kubernetes cluster can then reach the etcd cluster through the Service.

$ vi prometheus-etcdService.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
spec:
  type: ClusterIP
  clusterIP: None  # headless Service: set to None so no cluster IP is assigned
  ports:
  - name: port
    port: 2381
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
subsets:
- addresses:
  - ip: 172.16.1.11   # etcd node address; for a cluster, add the other node IPs below
    nodeName: etc-master
  ports:
  - name: port
    port: 2381         # etcd metrics port

Note that the Service created here does not select Pods by label, because, as mentioned, the etcd cluster is often independent of the Kubernetes cluster; in that case we must define the Endpoints object by hand. The metadata of the Endpoints object must be consistent with that of the Service, the Service's clusterIP is set to None, and newer etcd versions expose the metrics interface on port 2381.
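To illustrate what Prometheus derives from this manual Endpoints object, here is a sketch of how the listed addresses and the named port combine into scrape targets (an illustrative helper, not Prometheus code):

```python
# Each address in the Endpoints `subsets` is paired with the named
# port ("port" = 2381 here) to form one scrape target per etcd node.
def scrape_targets(addresses, port):
    return [f"{ip}:{port}" for ip in addresses]
```

For the three-node cluster from the systemd unit in section 2, `scrape_targets(["172.16.1.11", "172.16.1.12", "172.16.1.13"], 2381)` yields `["172.16.1.11:2381", "172.16.1.12:2381", "172.16.1.13:2381"]`, i.e. one target per member.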

Create service & endpoints:

[root@k8s01 prometheus-etcd]# kubectl apply -f prometheus-etcdService.yaml 
service/etcd-k8s created
endpoints/etcd-k8s created
[root@k8s01 prometheus-etcd]# kubectl apply -f prometheus-serviceMonitorEtcd.yaml 
servicemonitor.monitoring.coreos.com/etcd-k8s created
[root@k8s01 prometheus-etcd]# kubectl get svc -n kube-system -l k8s-app=etcd
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
etcd-k8s   ClusterIP   None         <none>        2381/TCP   22s

4, View the Prometheus targets

Once the configuration above is complete, check the targets page in the Prometheus dashboard; after a few minutes an etcd monitoring target will appear.
Wait patiently and the monitoring data will show up.
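Instead of refreshing the targets page, you can also poll the Prometheus HTTP API. This sketch assumes the API is reachable at localhost:9090 (for example via `kubectl -n monitoring port-forward svc/prometheus-k8s 9090`, using the default kube-prometheus Service name; adjust both to your setup):

```python
import json
import urllib.request

def etcd_target_health(targets_json, job="etcd"):
    """Map scrapeUrl -> health for every active target of the given job.

    With jobLabel: k8s-app, the job name is the value of the Service's
    k8s-app label, i.e. "etcd" for the Service created above.
    """
    return {
        t["scrapeUrl"]: t["health"]
        for t in targets_json["data"]["activeTargets"]
        if t["labels"].get("job") == job
    }

def fetch_targets(base_url="http://localhost:9090"):
    """Query the Prometheus targets API."""
    with urllib.request.urlopen(base_url + "/api/v1/targets", timeout=5) as r:
        return json.load(r)
```

When everything is wired up, `etcd_target_health(fetch_targets())` should report "up" for each etcd member's metrics URL.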

5, Import the etcd dashboard into Grafana

After the Prometheus configuration is complete, open the Grafana page and import the dashboard with ID 3070:

Result:

Tags: Docker Kubernetes etcd

Posted on Tue, 09 Nov 2021 23:07:00 -0500 by DanAuito