K8S: Deploying ELK 7.6 with Helm
Scenario
Deploy ELK, a stateful application, on K8S to collect and report daily test data (Heartbeat for application probing, APM for call-chain tracing, Metricbeat for performance metrics, etc.). This article provides the underlying storage through Rook, installs the ELK StatefulSets, deploys MetalLB for local load balancing, and finally exposes Kibana through the ingress-nginx controller.
Operation steps
- 1. Install rook
- 2. Install helm
- 3. Install ES
- 4. Install kibana
- 5. Install filebeat
- 6. Install MetalLB
- 7. Install the ingress controller
- 8. Access testing
1. Install rook
1.1 Installation instructions
Rook is a file, block, and object storage service built for cloud-native environments. It implements a self-managing, self-scaling, self-healing distributed storage service. Rook supports automatic deployment, bootstrapping, configuration, provisioning, scaling, upgrades, migration, disaster recovery, monitoring, and resource management. To accomplish all of this, Rook relies on the underlying container orchestration platform.
Ceph is a distributed storage system that supports file, block and object storage and is widely used in production environments.
This installation deploys Ceph through Rook to simplify Ceph's installation, management, and configuration. It is also designed to make use of local resources and to provide a StorageClass.
1.2 Rook and Ceph architecture
1.3 Install Ceph
Clone the git repository:
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
Install the CRDs and set up RBAC permissions
kubectl apply -f common.yaml
Install the operator, which sets up the various resources needed to start Ceph
kubectl apply -f operator.yaml
Install ceph cluster
kubectl apply -f cluster-test.yaml
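Optionally, confirm that the CephCluster resource was accepted before moving on (the CephCluster CRD comes from common.yaml); a quick check:
# Should list one CephCluster in the rook-ceph namespace
kubectl -n rook-ceph get cephcluster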
Install ceph management tools
kubectl apply -f toolbox.yaml
1.4 Notes
- Rook's images are not pulled from Docker Hub by default; most come from the quay.io registry, and pulling them is slow. If you create the resources directly with kubectl, the Pods will fail at runtime due to download timeouts, so it is best to pull the images well in advance.
- As far as I know, newer versions of Rook no longer support mounting a local directory to back Ceph storage, so you can only attach new local disks and let the operator automatically detect the disk devices and use them. (If you want to back Ceph with a local directory, you can try setting ROOK_HOSTPATH_REQUIRES_PRIVILEGED=true in operator.yaml, but I have not tried it.)
- Rook now mounts Ceph via CSI; flex has been phased out. However, CSI mode has a major pitfall: it only works properly with kernel > 4.10, otherwise many strange errors are reported, for example "rook RPC error: code = Internal desc = an error occurred while running (790) mount -t ceph". Refer to "On the failure of mounting a PVC by a Pod". (A quick kernel check is sketched after these notes.)
- If you want to mount Ceph using flex, remember to set ROOK_ENABLE_FLEX_DRIVER=true in operator.yaml, and remember to use the storageclass.yaml under the flex directory when creating the StorageClass.
- To upgrade the kernel, refer to "CentOS 7 kernel upgrade".
- Rescan for newly added local disks:
echo "- - -" > /sys/class/scsi_host/host*/scan
fdisk -l
- Since Ceph requires 3+ nodes, I removed the master taint so the master node could also be used.
kubectl taint node master node-role.kubernetes.io/master:NoSchedule-
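Since the CSI mount issue above comes down to the kernel version, a quick check worth running on each node:
# 3.10.x is the CentOS 7 default kernel and is too old for the CephFS CSI driver
uname -r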
View rook status
[root@docker1 ~]# kubectl get pods -n rook-ceph
NAME                                                READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2gb6l                              3/3     Running     0          3h18m
csi-cephfsplugin-hnwkn                              3/3     Running     0          3h18m
csi-cephfsplugin-provisioner-7b8fbf88b4-2wchx       4/4     Running     0          3h18m
csi-cephfsplugin-provisioner-7b8fbf88b4-6bjb8       4/4     Running     0          3h18m
csi-cephfsplugin-s5mln                              3/3     Running     0          3h18m
csi-rbdplugin-provisioner-6b8b4d558c-2nhrk          5/5     Running     0          3h18m
csi-rbdplugin-provisioner-6b8b4d558c-8n57n          5/5     Running     0          3h18m
csi-rbdplugin-qv5qt                                 3/3     Running     0          3h18m
csi-rbdplugin-rtsxc                                 3/3     Running     0          3h18m
csi-rbdplugin-v67hv                                 3/3     Running     0          3h18m
rook-ceph-crashcollector-docker3-6fbc6c7fc6-trmtz   1/1     Running     0          78m
rook-ceph-crashcollector-docker4-fcb9bf67c-kk7v4    1/1     Running     0          78m
rook-ceph-crashcollector-docker5-7775464c9b-dv4bh   1/1     Running     0          78m
rook-ceph-mds-myfs-a-7f4cdb685d-k9skd               1/1     Running     0          78m
rook-ceph-mds-myfs-b-5847d89857-254d9               1/1     Running     0          78m
rook-ceph-mgr-a-5b5f4588b7-bsf72                    1/1     Running     0          3h17m
rook-ceph-mon-a-965f65f76-gv2fv                     1/1     Running     0          3h17m
rook-ceph-operator-69f856fc5f-2dp9h                 1/1     Running     0          3h19m
rook-ceph-osd-0-7879445dfc-5g48g                    1/1     Running     0          3h17m
rook-ceph-osd-1-6bc695c476-v5xzt                    1/1     Running     0          3h17m
rook-ceph-osd-2-c96b7b6b7-cnhb7                     1/1     Running     0          3h16m
rook-ceph-osd-prepare-docker3-lgrwm                 0/1     Completed   0          3h17m
rook-ceph-osd-prepare-docker4-qp56x                 0/1     Completed   0          3h17m
rook-ceph-osd-prepare-docker5-q5czc                 0/1     Completed   0          3h17m
rook-ceph-tools-7d764c8647-ftxsb                    1/1     Running     0          3h15m
rook-discover-bgsf6                                 1/1     Running     0          3h18m
rook-discover-ldc7k                                 1/1     Running     0          3h18m
rook-discover-w2wb6                                 1/1     Running     0          3h18m
View ceph status
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
ceph status
ceph osd status
ceph df
rados df
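The csi-cephfs StorageClass and the myfs filesystem that the ES PVCs use later are not shown above. A minimal sketch of how they can be created, assuming the file names shipped with the rook release-1.2 examples (verify against your checkout):
# Run from rook/cluster/examples/kubernetes/ceph
kubectl apply -f filesystem.yaml               # CephFilesystem "myfs" -> the rook-ceph-mds-myfs pods above
kubectl apply -f csi/cephfs/storageclass.yaml  # StorageClass "csi-cephfs" referenced by the ES PVCs later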
2. Install helm
2.1 Installation instructions
Helm is to Kubernetes what yum is to CentOS: it is the package manager for k8s. For details, refer to the Helm tutorial.
2.2 Install Helm
Install the Helm v3 release:
wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
tar -xzvf helm-v3.1.2-linux-amd64.tar.gz
cd linux-amd64 && mv helm /usr/bin/
2.3 Install the helm repository
Add the repository provided by Elastic:
helm repo add elastic https://helm.elastic.co
helm search repo elastic
2.4 Notes
- Helm v2 has been deprecated, so install Helm v3 instead (see the quick version check below).
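A quick way to confirm which major version ended up on the PATH:
helm version
# Expect output starting with version.BuildInfo{Version:"v3.1.2", ...}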
3. Install ES
3.1 Installation Instructions
Elasticsearch is a real-time, distributed, scalable search engine that supports full-text and structured search. It is often used to index and search large volumes of log data, as well as many other types of documents.
3.2 Install ES
The Elasticsearch installed with Helm's defaults requests too much storage, so I downloaded the Helm chart locally, modified the volume values, and added the StorageClass name to the PVC template.
Modify the volumeClaimTemplate in the StatefulSet. For background, see "Understanding Kubernetes StatefulSets".
helm pull elastic/elasticsearch
tar -zxvf elasticsearch-7.6.1.tgz
cd elasticsearch
vim values.yaml

volumeClaimTemplate:
  storageClassName: "csi-cephfs"
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 3Gi
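Optionally, render the chart locally before installing to confirm the volumeClaimTemplate change took effect (Helm v3 can template without touching the cluster); a quick sanity check:
# Render the chart and show the generated volumeClaimTemplates section
helm template elastic ./ | grep -A 6 volumeClaimTemplates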
Install es
helm install elastic ./
3.3 View running status
View StatefulSet
[root@docker1 elasticsearch]# kubectl get sts -owide
NAME                   READY   AGE    CONTAINERS      IMAGES
elasticsearch-master   3/3     124m   elasticsearch   docker.elastic.co/elasticsearch/elasticsearch:7.6.1
Look at the Pods: there are three, and the Pod names are ordered, unlike the randomly named Pods of a Deployment.
[root@docker1 elasticsearch]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-master-0   1/1     Running   0          125m
elasticsearch-master-1   1/1     Running   0          125m
elasticsearch-master-2   1/1     Running   0          125m
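Optionally, check that the cluster actually formed and is green, assuming the chart's default service name elasticsearch-master (the same name used in the Kibana config below):
kubectl port-forward svc/elasticsearch-master 9200:9200 &
curl -s http://localhost:9200/_cluster/health?pretty
# Expect "status" : "green" and "number_of_nodes" : 3
kill %1   # stop the port-forward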
View PVC and storageclass
[root@docker1 elasticsearch]# kubectl get storageclasses.storage.k8s.io
NAME         PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   137m
[root@docker1 elasticsearch]# kubectl get pvc
NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc                                    Bound    pvc-4ebff26d-d2d1-4f61-9d73-9141f631f780   1Gi        RWX            csi-cephfs     121m
elasticsearch-master-elasticsearch-master-0   Bound    pvc-2cb96692-20f1-44ba-a307-3daeb4e7aded   3Gi        RWO            csi-cephfs     126m
elasticsearch-master-elasticsearch-master-1   Bound    pvc-323697ae-065a-4d1b-a9ec-3fabc66a4419   3Gi        RWO            csi-cephfs     126m
elasticsearch-master-elasticsearch-master-2   Bound    pvc-95f070f9-a4e0-4cd0-a8c9-969f69521063   3Gi        RWO            csi-cephfs     126m
3.4 Notes
- The StorageClass and PVC were created, but the Pod could not mount the volume. It turned out that only kernel > 4.10 works properly; otherwise many strange errors are reported, such as "rook RPC error: code = Internal desc = an error occurred while running (790) mount -t ceph". Refer to "On the failure of mounting a PVC by a Pod".
- Modify the ES configuration in values.yaml to suit your needs, and remember to change it before installing.
- If an error occurs, use kubectl logs and kubectl describe to locate the problem.
4. Deploy Kibana
4.1 Installation instructions
Kibana is the visualization, analysis, and exploration front end for Elastic data.
4.2 Installation
Download Kibana's Helm chart locally and modify the values, in particular:
helm pull elastic/kibana
tar -zxvf kibana-7.6.1.tgz
cd kibana
vim values.yaml

# The kibana configuration is modified here; its default location in the container is /usr/share/kibana/config/kibana.yml
kibanaConfig:
  kibana.yml: |
    server.port: 5601
    server.host: "0.0.0.0"
    elasticsearch.hosts: [ "http://elasticsearch-master:9200" ]

# Enable ingress; the effect can be seen later
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - kibana-kibana

helm install kibana ./
4.3 View running status
[root@docker1 kibana]# kubectl get pods -l app=kibana
NAME                             READY   STATUS    RESTARTS   AGE
kibana-kibana-6b65d94756-p4tr4   1/1     Running   0          74m
4.4 Notes
- Kibana failed to install with READY 0/1. The error was {"type":"log","@timestamp":"2020-03-31T10:22:21Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/mN6of5kpT86Eyj9Byyu] already exists"}. Delete the .kibana* indices in ES. Reference: kibana-task-manager error
# 10.96.193.2 is the svc address of elasticsearch-master
curl -X DELETE http://10.96.193.2:9200/.kibana*
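Once the indices are cleaned up and the Pod goes READY 1/1, Kibana's status API is a quick way to confirm it is healthy, assuming the chart's default service name kibana-kibana:
kubectl port-forward svc/kibana-kibana 5601:5601 &
curl -s http://localhost:5601/api/status | head -c 300   # the overall state should be "green"
kill %1   # stop the port-forward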
5. Install Filebeat
5.1 Installation Instructions
Filebeat is a log data collector for local files. It monitors log directories or tails files and forwards the data to Elasticsearch, Logstash, Kafka, and so on for indexing.
5.2 Install Filebeat
helm install filebeat elastic/filebeat
5.3 View running status
[root@docker1 ingress]# kubectl get pods -l app=filebeat-filebeat
NAME                      READY   STATUS    RESTARTS   AGE
filebeat-filebeat-6clnj   1/1     Running   0          18h
filebeat-filebeat-kpzcb   1/1     Running   0          18h
filebeat-filebeat-xpxnz   1/1     Running   0          18h
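To confirm Filebeat is actually shipping data, list its indices in ES; a quick check assuming the same elasticsearch-master service address used in the Kibana note above (substitute your own svc IP):
# Filebeat's daily indices should appear here once logs start flowing
curl -s "http://10.96.193.2:9200/_cat/indices/filebeat-*?v"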
6. Deploy metalLB
6.1 Installation instructions
MetalLB provides a network load-balancer implementation for Kubernetes clusters. In short, it lets you create Kubernetes Services of type "LoadBalancer" in a locally run (bare-metal) cluster.
In Layer 2 mode, all traffic for a service IP flows to one node, and kube-proxy then spreads it across the service's Pods.
6.2 Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Create a config file of the Layer 2 type and add a range of locally available addresses.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.14.161-192.168.14.165

kubectl apply -f config.yaml
6.3 View running status
The addresses handed out from this pool can be seen later as the LoadBalancer address of the ingress service.
[root@docker1 ingress]# kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5c9894b5cd-hrhdf   1/1     Running   0          17m
speaker-68hzp                 1/1     Running   0          17m
speaker-892bv                 1/1     Running   0          17m
speaker-l9mcg                 1/1     Running   0          17m
speaker-m4pbl                 1/1     Running   0          17m
speaker-q4jrq                 1/1     Running   0          17m
speaker-qwhqm                 1/1     Running   0          17m
6.4 Notes
- Since Layer 2 mode is used, the address pool must consist of unused addresses on the same network segment (a quick allocation test is sketched after these notes).
- MetalLB relies on kube-proxy to work. You can choose how kube-proxy is implemented; if you switch to ipvs, refer to the MetalLB installation docs for the required configuration.
- externalTrafficPolicy can be set in two ways: Cluster or Local. Cluster: kube-proxy on the node that receives the traffic load-balances it across all Pods of the service. Local: kube-proxy on the node that receives the traffic distributes it only to the service's Pods on that node.
- The BGP mode can be explored on its own.
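A throwaway way to verify that MetalLB hands out addresses from the pool, using an arbitrary test deployment (lb-test is just a placeholder name):
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test    # EXTERNAL-IP should come from 192.168.14.161-192.168.14.165
kubectl delete svc/lb-test deployment/lb-test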
7. Deploy Ingress
7.1 Installation instructions
Ingress is the API object that manages external access (typically HTTP) to services in the cluster. Ingress provides load balancing, SSL termination, and name-based virtual hosting. Here it is paired with MetalLB for address access; without MetalLB, using Ingress alone is awkward when it comes to choosing ports.
7.2 Install ingress
Install the deployment:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
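Before exposing the controller through the LoadBalancer service, it is worth confirming the controller Pod is running; a quick check, assuming the labels used by the 0.30.0 mandatory.yaml:
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx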
Download the service file, modify it, and set externalTrafficPolicy=Cluster (recommended).
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
---

kubectl apply -f cloud-generic.yaml
7.3 View operational status
View ingress running status
[root@docker1 ingress]# kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.96.25.12   192.168.14.161   80:31622/TCP,443:31187/TCP   20s
Access port 80 for testing
[root@docker1 ingress]# curl 192.168.14.161
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.17.8</center>
</body>
</html>
8. Configure ES access
8.1 Configuration Notes
When Kibana was installed, the ingress option was enabled, so we can see the Ingress directly. If it was not enabled at the time, we can also write a kibana-ingress.yml file ourselves.
# The host is kibana-kibana
# The matched URL path is /
# The backend service is kibana-kibana on port 5601
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: kibana-kibana
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-kibana
          servicePort: 5601

kubectl apply -f kibana-ingress.yaml
8.2 View running status
[root@docker1 ingress]# kubectl get ingresses
NAME            HOSTS           ADDRESS          PORTS   AGE
kibana-kibana   kibana-kibana   192.168.14.161   80      15h
8.3 Visit Kibana
Add "192.168.14.161 kibana-kibana" to the local hosts file (kibana-kibana is the host name configured in the Ingress). Then open http://kibana-kibana directly in the browser.
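If you would rather not edit the hosts file, a quick check from the command line can send the Host header directly to the LoadBalancer address:
# Expect a 200/302 from Kibana instead of the default-backend 404
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: kibana-kibana" http://192.168.14.161/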
X-Pack is not enabled in Kibana, so monitoring cannot be turned on.
Add the Filebeat index pattern
View Discover
Follow-up
The ELK cluster is only roughly set up for now; integration will continue gradually.
Not yet complete.
Reference resources:
- Rook-based Kubernetes storage scheme
- Understanding Kubernetes StatefulSets