1, What's Helm
Before Helm, deploying an application on Kubernetes meant creating its Deployment, Service, and other resources one by one, which is tedious. As more projects move to microservices, deploying and managing complex containerized applications becomes even harder.
Helm packages these resources together and supports release version management and control, which greatly simplifies deploying and managing Kubernetes applications.
In essence, Helm makes Kubernetes application management (Deployment, Service, etc.) configurable: it dynamically renders the resource manifest files (deployment.yaml, service.yaml, ...) from templates and then submits them to Kubernetes to create the resources.
Helm is a package manager similar to YUM; it encapsulates the whole deployment workflow. Helm has three important concepts: chart, release, and repository.
- A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. You can think of a chart as the equivalent of an apt or yum software package.
- A release is a running instance of a chart, i.e. a running application. Installing a chart into a Kubernetes cluster produces a release; the same chart can be installed into the same cluster many times, and each installation is a separate release (with different values, one chart can back multiple releases).
- A repository is where charts are published and stored.
2, Helm components and related terms
Helm (v2) consists of two components, the Helm client and the Tiller server:
- The Helm client is responsible for creating and managing charts and releases and for interacting with Tiller.
- The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
1,Helm
Helm is a command-line client tool. It is mainly used to create, package, and publish Kubernetes application charts, and to create and manage local and remote chart repositories.
2,Tiller
Tiller is Helm's server side and is deployed inside the Kubernetes cluster. Tiller receives requests from the Helm client, generates the Kubernetes deployment files from the chart (which Helm calls a release), and submits them to Kubernetes to create the application. Tiller also provides release operations such as upgrade, deletion, and rollback.
3,Chart
A Helm package uses the TAR format. Like APT's DEB packages or YUM's RPM packages, it contains a set of YAML files that define Kubernetes resources.
4,Repository
A repository is Helm's chart warehouse. It is essentially a web server that stores a collection of chart packages for users to download and serves an index of the charts it holds. Helm can manage several different repositories at the same time.
5,Release
A chart deployed into a Kubernetes cluster with the helm install command is called a release.
Note: a release in Helm is not a "version" in the usual sense; here a release means one application instance deployed by Helm from a chart package.
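For example, a minimal sketch using the stable/memcached chart referenced later in this article (the release names here are made up): installing the same chart twice simply produces two independent releases.

```bash
# Two releases from one chart (hypothetical release names)
helm install memcached1 stable/memcached
helm install memcached2 stable/memcached
helm list   # both releases are listed and managed independently
```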
3, Helm deployment
1. Helm V2.0 deployment
More and more companies and teams are using Helm, the Kubernetes package manager, and we will also use Helm to install common Kubernetes components. Helm v2 consists of the helm client command-line tool and the Tiller server.
GitHub address of helm: https://github.com/helm/helm
Version used in this deployment: v2.16.9.
Helm releases new versions very quickly, so the latest version may already be newer.
1.1 Helm installation and deployment
```bash
[root@k8s-master software]# pwd
/root/software
[root@k8s-master software]# wget https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
[root@k8s-master software]# tar xf helm-v2.16.9-linux-amd64.tar.gz
[root@k8s-master software]# ll
total 12624
-rw-r--r-- 1 root root 12926032 Jun 16 06:55 helm-v2.16.9-linux-amd64.tar.gz
drwxr-xr-x 2 3434 3434       50 Jun 16 06:55 linux-amd64
[root@k8s-master software]# cp -a linux-amd64/helm /usr/bin/helm
```
Because the Kubernetes API Server has RBAC access control enabled, you need to create a ServiceAccount for Tiller (tiller) and grant it an appropriate role. For simplicity, we bind it directly to the cluster's built-in cluster-admin ClusterRole.
```bash
[root@k8s-master helm]# pwd
/root/k8s_practice/helm
[root@k8s-master helm]# cat rbac-helm.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@k8s-master helm]# kubectl apply -f rbac-helm.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
```
Initialize Helm's client and server
```bash
[root@k8s-master helm]# helm init --service-account tiller
..................
[root@k8s-master helm]# kubectl get pod -n kube-system -o wide | grep 'tiller'
tiller-deploy-8488d98b4c-j8txs   0/1   Pending   0   38m   <none>   <none>
# Checking the pod shows the image pull failed: by default Tiller pulls its image from
# Google's registry, which is not reachable from this environment without a proxy.
[root@k8s-master helm]# kubectl describe pod tiller-deploy-8488d98b4c-j8txs -n kube-system
```
Therefore, point Tiller at a reachable image address and apply it with the upgrade command:
```bash
[root@k8s-master helm]# helm init --upgrade --tiller-image registry.cn-beijing.aliyuncs.com/google_registry/tiller:v2.16.9
### Wait a while
[root@k8s-master helm]# kubectl get pod -o wide -A | grep 'till'
kube-system   tiller-deploy-7b7787d77-zln6t   1/1   Running   0   8m43s   10.244.4.123   k8s-node01   <none>   <none>
```
If the upgrade command fails, do the following. Installing Tiller automatically creates a tiller-deploy Deployment, so you can simply edit that Deployment and change its image:
```bash
[root@k8s-master efk]# kubectl edit deploy tiller-deploy -n kube-system
```
After the change is saved, the Deployment rolls out automatically; wait a moment and check again:
```bash
[root@k8s-master efk]# kubectl get pod -n kube-system
[root@k8s-master efk]# helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"dirty"}
```
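As a quick sanity check before using Helm v2 (a sketch, assuming the Deployment keeps the default name tiller-deploy shown above), you can wait for the rollout to finish and compare versions:

```bash
# Wait until Tiller's Deployment has finished rolling out
kubectl -n kube-system rollout status deploy/tiller-deploy
# Client and server should now both report v2.16.9
helm version --short
```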
2. HelmV3.0 deployment
The biggest difference between Helm V3 and V2 is that tiller is removed:
2.1 Helm installation and deployment
Download address of installation package: https://github.com/helm/helm/releases , latest version 3.7.0
Download the package: helm-v3.7.0-linux-amd64.tar.gz
```bash
[root@k8s-master ~]# wget https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz
[root@k8s-master ~]# tar xf helm-v3.7.0-linux-amd64.tar.gz
[root@k8s-master ~]# cp linux-amd64/helm /usr/bin/helm
```
That completes the Helm deployment. As you can see, deploying v3 is much simpler than v2.
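A quick check of the fresh v3 install (a small sketch; note that, as mentioned below, v3 ships with no default repositories, so listing them errors out until one is added):

```bash
helm version      # should report v3.7.0
helm repo list    # on a fresh v3 install this returns: Error: no repositories to show
```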
4, Using Helm
1. Using Helm repositories
1.1 Helm's default chart sources
```bash
[root@k8s-master templates]# helm repo list
NAME    URL
local   http://127.0.0.1:8879/charts
stable  https://kubernetes-charts.storage.googleapis.com
```
1.2 Adding a third-party chart repository
Search the official Helm Hub for charts:
```bash
[root@k8s-master ~]# helm search hub redis
```
Add a third-party chart repository:
```bash
[root@k8s-master ~]# helm repo add stable https://burdenbear.github.io/kube-charts-mirror
"stable" has been added to your repositories
```
Note:
- Helm v3.0+ ships with no default repository; you must add one yourself.
- The repository above was found online (via a Baidu search) and works well for now.
- If this source stops working, you can look for an alternative one, for example via https://www.guojingyi.cn/899.html.
Check that the repository was added:
```bash
[root@k8s-master ~]# helm repo list
[root@k8s-master ~]# helm search repo redis
```
1.4 Changing the Helm source
Whether to change the Helm source depends on your situation; in general it does not need to be changed.
Common commands:
```bash
# 1. Remove a helm source
helm repo remove stable
# 2. Add a helm source
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add stable https://burdenbear.github.io/kube-charts-mirror   # I use this one, shared openly by a netizen
# 3. Update the local repo cache
helm repo update
# 4. Show helm's current sources
helm repo list
```
Example:
```bash
[root@k8s-master ~]# helm repo remove stable
"stable" has been removed from your repositories
[root@k8s-master ~]# helm repo list
Error: no repositories to show
[root@k8s-master ~]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@k8s-master ~]# helm repo update    # Update the repo cache
[root@k8s-master ~]# helm repo list      # List the repository addresses
```
2. Common Helm application operations
```bash
# 1. List all available applications in the chart repositories
helm search repo
# 2. Search for a specific application
helm search repo memcached
# 3. Install a package with helm (the first argument is the release name)
helm install memcached1 stable/memcached
# 4. List the installed releases
helm list
# 5. Delete the specified release
helm delete memcached1
```
3. helm common commands
3.1 chart management
- create: create a new chart with the given name
- fetch: download a chart from a repository and (optionally) extract it into a local directory
- inspect: show a chart's details
- package: package a chart directory into a chart archive
- lint: run syntax checks against a chart
- verify: verify that the chart at the given path has been signed and is valid
These operations can be combined; the sketch below and the fuller example in the next section show them in more detail.
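A small illustration of a typical workflow (a sketch only; the chart name demo-chart is made up, and stable/memcached is the chart used in the earlier examples):

```bash
helm create demo-chart                 # scaffold a new chart skeleton
helm lint demo-chart                   # run syntax / best-practice checks
helm package demo-chart                # produces demo-chart-0.1.0.tgz in the current directory
helm show values stable/memcached      # Helm v3 name for "inspect values": print the chart's default values
helm pull stable/memcached --untar     # Helm v3 name for "fetch": download and extract the chart locally
```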
3.2 helm example
chart file information
```bash
[root@k8s-master my-test-app]# pwd
/root/helm/my-test-app
[root@k8s-master my-test-app]# cat Chart.yaml
apiVersion: v1
appVersion: v2.2
description: my test app
keywords:
  - myapp
maintainers:
  - email: zhang@test.com
    name: zhang
name: my-test-app    # The name value is the same as the parent directory name
version: v1.0.0
[root@k8s-master my-test-app]# cat values.yaml
deployname: my-test-app02
replicaCount: 2
images:
  repository: httpd
  tag: latest
[root@k8s-master my-test-app]# ll templates/
total 8
-rw-r--r--. 1 root root 543 Sep 24 deployment.yaml
[root@k8s-master my-test-app]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployname }}
  labels:
    app: mytestapp-deploy
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: mytestapp
      env: test
  template:
    metadata:
      labels:
        app: mytestapp
        env: test
        description: mytest
    spec:
      containers:
        - name: myapp-pod
          image: {{ .Values.images.repository }}:{{ .Values.images.tag }}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
[root@k8s-master my-test-app]# cat templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-test-app
  namespace: default
spec:
  type: NodePort
  selector:
    app: mytestapp
    env: test
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
```
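Before installing, it can help to render the templates locally and let the API server validate them without creating anything (a sketch; run it from the chart directory shown above):

```bash
[root@k8s-master my-test-app]# helm lint .
[root@k8s-master my-test-app]# helm template mytest-app01 .                   # render the manifests locally
[root@k8s-master my-test-app]# helm install mytest-app01 . --dry-run --debug  # server-side validation, nothing is created
```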
Generate release
```bash
[root@k8s-master my-test-app]# ll
total 8
-rw-r--r--. 1 root root 197 Sep 24 Chart.yaml
drwxr-xr-x. 2 root root  49 Sep 24 templates
-rw-r--r--. 1 root root  84 Sep 24 values.yaml
[root@k8s-master my-test-app]# helm install mytest-app01 .
```
```bash
[root@k8s-master my-test-app]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
my-test-app02-789dd8465c-9mk8k   1/1     Running   0          7m22s   10.244.1.4   k8s-node1   <none>           <none>
my-test-app02-789dd8465c-fvcs7   1/1     Running   0          7m22s   10.244.2.4   k8s-node2   <none>           <none>
[root@k8s-master my-test-app]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        13h
my-test-app   NodePort    10.109.118.121   <none>        80:31289/TCP   7m25s
```
Access test:
The application can be reached.
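One way to check from the command line (a sketch; `<node-ip>` is a placeholder for any node's address, and the port numbers come from the svc output above):

```bash
# Via the NodePort on any node (replace <node-ip> with a real node address)
curl -I http://<node-ip>:31289
# Via the ClusterIP, from a cluster node
curl -I http://10.109.118.121
```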
4. chart update and rollback
values.yaml file modification
```bash
[root@k8s-master my-test-app]# cat values.yaml
```
This image was found on daocloud
```bash
[root@k8s-master my-test-app]# helm upgrade mytest-app01 .    # update
### If run from the parent directory, use: helm upgrade mytest-app01 my-test-app/
```
Check the pod again
Wait a minute....
```bash
[root@k8s-master my-test-app]# helm list    # you can see that the update succeeded
```
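Rollback works on the same release history (a minimal sketch; revision 1 is just an example, check helm history for the real revision numbers):

```bash
[root@k8s-master my-test-app]# helm history mytest-app01      # list the release's revisions
[root@k8s-master my-test-app]# helm rollback mytest-app01 1   # roll back to revision 1
[root@k8s-master my-test-app]# helm list                      # the revision counter increases after a rollback
```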
5, Hands-on project: EFK with Helm v3.7.0 + K8S v1.22.2
Use Helm to install the EFK logging platform on the k8s cluster.
Operating system | Configuration | Address
---|---|---
CentOS 7.4 | 4G 2 CPU | 192.168.153.148 (k8s-master)
CentOS 7.4 | 4G 2 CPU | 192.168.153.147 (k8s-node1)
CentOS 7.4 | 4G 2 CPU | 192.168.153.196 (k8s-node2)
Remember to turn off the firewall and selinux. The number of cpu cores should be at least 2
EFK consists of Elasticsearch, Fluentd, and Kibana. Elasticsearch is a distributed search engine that can be used for log retrieval, Fluentd is an open-source real-time data collector, and Kibana is a web platform that provides analysis and visualization on top of Elasticsearch. Together, these three open-source tools form a distributed system for collecting and analyzing log data in real time.
Before this, the industry mostly used ELK (Elasticsearch + Logstash + Kibana) to manage logs. Logstash is a data collection engine with real-time pipeline capabilities, but compared with Fluentd its performance is weaker and its JVM tends to drive memory usage up, so it has gradually been replaced by Fluentd, turning ELK into EFK.
Fluentd is an open-source data collector designed for processing data streams and uses JSON as its data format. Its plug-in architecture gives it high scalability and availability, and it also implements highly reliable data forwarding.
Of course, more companies still use Logstash and Filebeat than Fluentd, but Fluentd really is easy to use; I am still studying it myself.
1. EFK image download
Since the images are hosted abroad, we download them from a domestic mirror and then retag them with the expected image names. Run the following script on every machine in the cluster.
Run on all nodes:
```bash
[root@k8s-master ~]# vim download_efk_image.sh
#!/bin/sh
##### Execute on all machines, both master and worker nodes
# Load environment variables
. /etc/profile
. /etc/bashrc

# Variables
elasticsearch_iamge="elasticsearch-oss:6.7.0"
busybox_image="busybox:latest"
bats_image="bats:0.4.0"
fluentd_image="fluentd-elasticsearch:v2.3.2"
kibana_image="kibana-oss:6.7.0"

# elasticsearch image download
docker pull registry.cn-beijing.aliyuncs.com/google_registry/${elasticsearch_iamge}
docker tag  registry.cn-beijing.aliyuncs.com/google_registry/${elasticsearch_iamge} docker.elastic.co/elasticsearch/${elasticsearch_iamge}
docker rmi  registry.cn-beijing.aliyuncs.com/google_registry/${elasticsearch_iamge}
# busybox image download
docker pull registry.cn-beijing.aliyuncs.com/google_registry/${busybox_image}
docker tag  registry.cn-beijing.aliyuncs.com/google_registry/${busybox_image} ${busybox_image}
docker rmi  registry.cn-beijing.aliyuncs.com/google_registry/${busybox_image}
# bats image download
docker pull registry.cn-beijing.aliyuncs.com/google_registry/${bats_image}
docker tag  registry.cn-beijing.aliyuncs.com/google_registry/${bats_image} dduportal/${bats_image}
docker rmi  registry.cn-beijing.aliyuncs.com/google_registry/${bats_image}
# fluentd-elasticsearch image download
docker pull registry.cn-beijing.aliyuncs.com/google_registry/${fluentd_image}
docker tag  registry.cn-beijing.aliyuncs.com/google_registry/${fluentd_image} gcr.io/google-containers/${fluentd_image}
docker rmi  registry.cn-beijing.aliyuncs.com/google_registry/${fluentd_image}
# kibana-oss image download
docker pull registry.cn-beijing.aliyuncs.com/google_registry/${kibana_image}
docker tag  registry.cn-beijing.aliyuncs.com/google_registry/${kibana_image} docker.elastic.co/kibana/${kibana_image}
docker rmi  registry.cn-beijing.aliyuncs.com/google_registry/${kibana_image}

[root@k8s-master ~]# chmod +x download_efk_image.sh
[root@k8s-master ~]# ./download_efk_image.sh
```
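Optionally, verify on each node that the retagged images are present (a simple check, not part of the original script):

```bash
[root@k8s-master ~]# docker images | grep -E 'elasticsearch-oss|fluentd-elasticsearch|kibana-oss|busybox|bats'
```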
2. Create the efk namespace
For this EFK deployment, we create a dedicated efk namespace.
```bash
# Create the efk namespace
[root@k8s-master ~]# kubectl create namespace efk
namespace/efk created
[root@k8s-master ~]# kubectl get ns
```
3. chart download and configuration modification
```bash
# Current directory
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
# Check the ES version. This time we deploy chart version 1.30.0, which ships ES 6.7.0
# The helm source used here (and for the later downloads) is the one added earlier; it works for now
[root@k8s-master efk]# helm repo list
NAME     URL
stable   https://burdenbear.github.io/kube-charts-mirror
[root@k8s-master efk]# helm search repo stable/elasticsearch
```
```bash
[root@k8s-master efk]# helm fetch stable/elasticsearch --version 1.30.0
[root@k8s-master efk]# ls
elasticsearch-1.30.0.tgz
[root@k8s-master efk]# tar xzf elasticsearch-1.30.0.tgz

# Modify configuration file 1
[root@k8s-master efk]# vim elasticsearch/values.yaml
initImage:
  repository: "busybox"
  tag: "latest"
  pullPolicy: "IfNotPresent"   # changed from Always to IfNotPresent so the latest image is not pulled every time
==================================
client:
  name: client
  replicas: 1                  # changed from 2 to 1 because this runs on a small PC with limited memory
  serviceType: ClusterIP
===================================
master:
  name: master
  exposeHttp: false
  replicas: 3
  heapSize: "512m"
  # additionalJavaOpts: "-XX:MaxRAM=512m"
  persistence:
    enabled: false             # no spare PVCs, so changed from true to false
    accessMode: ReadWriteOnce
    name: data
===================================
data:
  name: data
  exposeHttp: false
  replicas: 1                  # changed from 2 to 1 because this runs on a small PC with limited memory
  heapSize: "1024m"            # changed from 1536m to 1024m because this runs on a small PC with limited memory
  # additionalJavaOpts: "-XX:MaxRAM=1536m"
  persistence:
    enabled: false             # no spare PVCs, so changed from true to false
    accessMode: ReadWriteOnce
    name: data

# Modify configuration file 2
[root@k8s-master efk]# vim elasticsearch/templates/client-deployment.yaml
apiVersion: apps/v1            # changed from apps/v1beta1 to apps/v1
kind: Deployment
===================================
```
```bash
# Modify configuration file 3
[root@k8s-master efk]# vim elasticsearch/templates/data-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
===================================

# Modify configuration file 4
[root@k8s-master efk]# vim elasticsearch/templates/master-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
===================================
```
4. Elasticsearch deployment
The steps are as follows:
```bash
# Current directory
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
# Deploy ES
[root@k8s-master efk]# helm install es01 --namespace=efk elasticsearch
# Check the release
[root@k8s-master efk]# helm list -n efk
```
```bash
[root@k8s-master efk]# kubectl get deploy -n efk
[root@k8s-master efk]# kubectl get pods -n efk
[root@k8s-master efk]# kubectl get service -n efk
[root@k8s-master efk]# kubectl get statefulsets -n efk
```
ES is deployed as a StatefulSet mainly because that gives it a unique, unchanging DNS identity; later, Kibana and Fluentd only need to be configured with that DNS name.
5. Elasticsearch access test
The IP below comes from the ES client Service, so you can test access against the Service's ClusterIP:
```bash
[root@k8s-master efk]# curl 10.96.25.151:9200
# ES also has an endpoint that shows the exact cluster health
[root@k8s-master efk]# curl 10.96.25.151:9200/_cluster/health?pretty
```
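Two more standard ES endpoints can confirm that all nodes joined and which indices exist (assuming the default REST API on port 9200):

```bash
[root@k8s-master efk]# curl 10.96.25.151:9200/_cat/nodes?v      # one line per ES node (client, master, data)
[root@k8s-master efk]# curl 10.96.25.151:9200/_cat/indices?v    # lists the indices, once something starts writing
```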
At this point, Elasticsearch is deployed
6. Elasticsearch client domain name acquisition
This step only verifies that ES can be reached through its DNS name; if you prefer not to use the DNS name, the IP works too.
Obtain the domain name of the es01-elasticsearch-client Service from its svc information; it will be used below by Fluentd and Kibana.
Start a pod
```bash
[root@k8s-master test]# pwd
/root/k8s_practice/test
[root@k8s-master test]# cat myapp_demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-demo
  namespace: efk
  labels:
    k8s-app: myapp
spec:
  containers:
    - name: myapp
      image: daocloud.io/library/nginx:1.12.0-alpine   # the nslookup command in this image can be used to test DNS names
      imagePullPolicy: IfNotPresent
      ports:
        - name: httpd
          containerPort: 80
          protocol: TCP
```
The image daocloud.io/library/nginx:1.12.0-alpine was found on daocloud.
```bash
[root@k8s-master test]# kubectl apply -f myapp_demo.yaml
[root@k8s-master test]# kubectl get pod -n efk

# View the IP address of the ES service
[root@k8s-master test]# kubectl get svc -n efk

# Find its fixed domain name
[root@k8s-master test]# kubectl exec -it myapp-demo sh -n efk
/ # nslookup 10.96.25.151
```
This reverse lookup reveals the fixed domain name of the ES client service; that is all this step is for.
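To double-check, you can also hit ES by that domain name from the same test pod (a sketch; it assumes the alpine-based image ships busybox wget, which nginx alpine images normally do):

```bash
[root@k8s-master test]# kubectl exec -it myapp-demo -n efk -- \
    wget -qO- http://es01-elasticsearch-client.efk.svc.cluster.local:9200
```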
7. Fluentd chart download
chart download and configuration modification
```bash
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
[root@k8s-master efk]# helm search repo stable/fluentd-elasticsearch
```
```bash
# Fetch fluentd-elasticsearch and extract it
[root@k8s-master efk]# helm fetch stable/fluentd-elasticsearch --version 2.0.7   # if the download fails, just retry a few times
[root@k8s-master efk]# tar xf fluentd-elasticsearch-2.0.7.tgz
```
```bash
# Modify the values file
[root@k8s-master efk]# vim fluentd-elasticsearch/values.yaml
### Why a domain name instead of an IP? Because the ClusterIP can change if the ES Service is recreated, while the domain name stays the same
elasticsearch:
  host: 'es01-elasticsearch-client.efk.svc.cluster.local'   # modified; see above for how the domain name was obtained
  port: 9200
  scheme: 'http'
```
8. fluentd-elasticsearch deployment
```bash
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
[root@k8s-master efk]# helm install fluentd-es01 --namespace=efk fluentd-elasticsearch

# Check the status
[root@k8s-master efk]# helm list -n efk
[root@k8s-master efk]# kubectl get pod -n efk
```
9. Kibana chart download
Kibana's major and minor versions must match Elasticsearch (ES); the patch version can differ. Still, it is best to keep the two versions identical to avoid surprises caused by version differences.
Since Elasticsearch (ES) is 6.7.0, Kibana uses that version as well.
chart download and configuration modification
```bash
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
# View all versions
[root@k8s-master efk]# helm search repo stable/kibana
```
```bash
# Fetch kibana and extract it
[root@k8s-master efk]# helm fetch stable/kibana --version 3.2.7   # if the download fails, retry a few times
[root@k8s-master efk]# tar xf kibana-3.2.7.tgz
```
```bash
# Configuration change 1
[root@k8s-master efk]# vim kibana/values.yaml
### Why a domain name instead of an IP? Because the ClusterIP can change if the ES Service is recreated, while the domain name stays the same
files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://es01-elasticsearch-client.efk.svc.cluster.local:9200   # modified; see above for how the domain name was obtained
===================================
service:
  type: NodePort        # changed from ClusterIP to NodePort
  externalPort: 443
  internalPort: 5601
  nodePort: 30601       # added; the Service NodePort range is 30000-32767

# Configuration change 2
[root@k8s-master efk]# vim kibana/templates/deployment.yaml
apiVersion: apps/v1     # changed from apps/v1beta1 to apps/v1
kind: Deployment
metadata:
==================================
spec:
  replicas: {{ .Values.replicaCount }}
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
  # Added: the selector block below
  selector:
    matchLabels:
      app: {{ template "kibana.name" . }}
      release: "{{ .Release.Name }}"
```
10. kibana deployment
```bash
# Current directory
[root@k8s-master efk]# pwd
/root/k8s_practice/efk
# Deploy kibana-oss
[root@k8s-master efk]# helm install kibana01 --namespace=efk kibana
# Check the status
[root@k8s-master efk]# helm list -n efk
```
```bash
[root@k8s-master efk]# kubectl get deploy -n efk
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
es01-elasticsearch-client   1/1     1            1           52m
kibana01                    1/1     1            1           28m
[root@k8s-master efk]# kubectl get pod -n efk

# View svc information
[root@k8s-master efk]# kubectl get svc -n efk -o wide
```
11. Browser access
```bash
[root@k8s-master efk]# kubectl get pod -n efk -o wide
```
http://192.168.153.147:30601
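A quick command-line probe before opening the browser (the address is the node IP and NodePort from the svc output above; a 200 or 302 response means Kibana is answering):

```bash
[root@k8s-master efk]# curl -s -o /dev/null -w '%{http_code}\n' http://192.168.153.147:30601
```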
Next, create an index pattern in Kibana. The logs collected by Fluentd are written to ES, so we need to know the index name that was configured.
```bash
# The index name is defined in Fluentd's configuration; open the values file and look for it, no need to rush
[root@k8s-master efk]# vim fluentd-elasticsearch/values.yaml
```
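A quick way to locate the relevant settings is to grep the values file; with this chart the logs typically end up in daily logstash-* indices, but confirm the exact key names in your chart version (the grep pattern below is only an assumption about likely key names):

```bash
[root@k8s-master efk]# grep -nE 'logstash|index' fluentd-elasticsearch/values.yaml
```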
Once you understand where the index name is defined, you can change it later if needed!!
Done!!!!