Monitoring etcd with Kube-Prometheus in a Kubernetes cluster

1, Background

Reprinted, with slight adjustments, from: https://cloud.tencent.com/developer/article/1760581
Besides monitoring the built-in resource objects, nodes, and components of a Kubernetes cluster, we sometimes need to add custom monitoring items for our own services. Adding a custom monitoring target takes three simple steps:

First, create a ServiceMonitor object, which adds a scrape configuration to Prometheus;
Second, have the ServiceMonitor select a Service object that fronts the metrics endpoint;
Third, make sure that Service can actually reach the metrics data.

The Kube-Prometheus deployment above already monitors the Kubernetes cluster itself, but it contains no etcd monitoring. Next we show how to add monitoring for an etcd cluster. Whether the etcd cluster runs outside the Kubernetes cluster or was installed inside it with kubeadm, we treat it as an independent cluster outside Kubernetes, because the two cases are handled in exactly the same way.

Note:

etcd is a core component of the Kubernetes cluster: it is the cluster's database.

Our etcd is a binary deployment, i.e. it does not run in Docker or any other container runtime.

2, Configure the etcd service

①. View the etcd startup configuration

[root@k8s01 manifests]# cat /etc/systemd/system/etcd.service 
Description=Etcd Server

ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd_new/etcd.restore \
  --wal-dir=/data/k8s/etcd_new/wal \
  --name=k8s01 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls= \
  --initial-advertise-peer-urls= \
  --listen-client-urls=, \
  --advertise-client-urls= \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=k8s01=,k8s02=,k8s03= \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \


We can see that the startup parameters include a --listen-metrics-urls flag, which makes the metrics interface listen on port 2381 over plain HTTP. No certificate configuration is therefore required, which is much simpler than in older versions, where the metrics endpoint was only reachable over HTTPS and needed the corresponding certificates configured.
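What that HTTP endpoint serves is the plain Prometheus text exposition format. As an illustration of the data Prometheus will scrape, here is a minimal Python sketch that splits one sample line into metric name, labels, and value; the line itself is hard-coded sample data (etcd_server_has_leader is a real etcd metric, but the label set here is made up), not output captured from a live node:

```python
# Minimal parser for a single line of Prometheus text exposition format.
import re

def parse_sample(line):
    """Split 'name{label="v",...} value' into (name, labels dict, float value)."""
    m = re.match(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(.*)\})?\s+(\S+)$', line)
    if m is None:
        raise ValueError("not a sample line: %r" % line)
    name, raw_labels, value = m.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

# Illustrative sample line, not real scrape output.
print(parse_sample('etcd_server_has_leader{node="k8s01"} 1'))
```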

If the flag is not there, add --listen-metrics-urls=http://ip:2381 to the startup file and restart the etcd service.
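Instead of editing the unit file in place, the flag can also be added through a systemd drop-in; the following is only a sketch, where the drop-in path, the `<node-ip>` placeholder, and the `...` standing for your existing options must all be adapted to your deployment:

```ini
# /etc/systemd/system/etcd.service.d/metrics.conf  (hypothetical drop-in path)
# ExecStart= must first be cleared before it can be re-declared in a drop-in.
[Service]
ExecStart=
ExecStart=/opt/k8s/bin/etcd \
  ... \
  --listen-metrics-urls=http://<node-ip>:2381
```

After `systemctl daemon-reload && systemctl restart etcd`, requesting `http://<node-ip>:2381/metrics` with curl should return plain-text metrics.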

3, Deploy the ServiceMonitor, Service & Endpoints in k8s

①. Deploy the ServiceMonitor

$ vi prometheus-serviceMonitorEtcd.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd-k8s
  namespace: monitoring
  labels:
    k8s-app: etcd-k8s
spec:
  jobLabel: k8s-app
  endpoints:
  - port: port
    interval: 15s
  selector:
    matchLabels:
      k8s-app: etcd
  namespaceSelector:
    matchNames:
    - kube-system

Above, we created a ServiceMonitor object named etcd-k8s in the monitoring namespace. Its basic properties are the same as described earlier: it matches the Service labeled k8s-app=etcd in the kube-system namespace, and jobLabel names the Service label whose value becomes the scrape job name. Since etcd's metrics interface is on port 2381 and needs no HTTPS authentication, the default (non-TLS) configuration is enough. We can then create this ServiceMonitor object directly:

kubectl apply -f prometheus-serviceMonitorEtcd.yaml

②. Create the Service & Endpoints

Because etcd is independent of the cluster, we need to create an Endpoints object to proxy it into the Kubernetes cluster, and then create a Service bound to that Endpoints object. Applications inside the Kubernetes cluster can then reach the etcd cluster.

$ vi prometheus-etcdService.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
spec:
  type: ClusterIP
  clusterIP: None  # Set to None: do not assign a Service IP (headless)
  ports:
  - name: port
    port: 2381
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
subsets:
- addresses:
  - ip:   # Specify the etcd node address; for a cluster, keep adding entries
    nodeName: etc-master
  ports:
  - name: port
    port: 2381         # etcd metrics port

The Service we created here does not match Pods via a label selector because, as noted several times, our etcd cluster is independent of the Kubernetes cluster; in that case we have to define the Endpoints object by hand. Note that the metadata section must be consistent with the Service's, the Service's clusterIP is set to None, and newer etcd versions expose the metrics interface on port 2381.

Create service & endpoints:

[root@k8s01 prometheus-etcd]# kubectl apply -f prometheus-etcdService.yaml 
service/etcd-k8s created
endpoints/etcd-k8s created
[root@k8s01 prometheus-etcd]# kubectl apply -f prometheus-serviceMonitorEtcd.yaml 
servicemonitor.monitoring.coreos.com/etcd-k8s created
[root@k8s01 prometheus-etcd]# kubectl get svc -n kube-system -l k8s-app=etcd
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
etcd-k8s   ClusterIP   None         <none>        2381/TCP   22s

4, View Prometheus targets

Once the configuration above is complete, check the targets page in the Prometheus dashboard after a few minutes, and an etcd monitoring item will appear:
Wait patiently and the monitoring data will show up.
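Once the target is up, a few PromQL queries can confirm the data is usable; the metric names below come from etcd's own instrumentation and may vary slightly between etcd versions:

```
# 1 if the member currently sees a leader
etcd_server_has_leader

# Leader changes in the last hour (frequent changes suggest instability)
increase(etcd_server_leader_changes_seen_total[1h])

# 99th percentile WAL fsync latency
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
```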

5, Import an etcd dashboard into Grafana

After the Prometheus configuration is complete, open the Grafana page and import the dashboard with ID "3070":

Result (screenshot of the imported etcd dashboard omitted).

Tags: Docker Kubernetes etcd

Posted on Tue, 09 Nov 2021 23:07:00 -0500 by DanAuito