k8s deployment of an elasticsearch cluster

Environment preparation

The k8s and ceph environments we use are as follows:
https://blog.51cto.com/leejia/2495558
https://blog.51cto.com/leejia/2499684

Introduction to ECK

Elastic Cloud on Kubernetes (ECK) is a new orchestration product based on the Kubernetes Operator pattern that lets users configure, manage, and run Elasticsearch clusters on Kubernetes. ECK's vision is to provide a SaaS-like experience for Elastic products and solutions on Kubernetes.

ECK is built on the Kubernetes Operator pattern and is installed and deployed into a Kubernetes cluster; it focuses on simplifying the operational tasks that come after the cluster is running:

  • Manage and monitor multiple clusters
  • Upgrade to new versions with ease
  • Scale cluster capacity up or down
  • Change cluster configuration
  • Dynamically scale local storage
  • Take backups

Kubernetes is currently the leader in container orchestration, and the Elastic community released ECK to make Elasticsearch easier to run on the cloud, joining the cloud-native ecosystem and keeping up with the times.

Deploy ECK

Deploy ECK and check whether its logs look healthy:

# kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
# kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

After a few minutes, check whether the elastic-operator is running properly; ECK runs a single elastic-operator pod:

# kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          2m55s
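As an optional sanity check (not part of the original walkthrough), you can also confirm that the operator registered its custom resource definitions; the grep pattern below assumes the usual ECK CRD names ending in k8s.elastic.co:

# kubectl get crd | grep k8s.elastic.co
# kubectl -n elastic-system get statefulset elastic-operator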
Deploying an elasticsearch cluster with ceph persistent storage via ECK

For our test scenario we deployed the cluster with one master node and one data node; for production, 3+ master nodes are recommended. In the following manifest, the instance heap size (ES_JAVA_OPTS), the container memory requests/limits, and the container's virtual memory setting (vm.max_map_count) are configured; adjust them to the needs of your cluster:

# vim es.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.1
  nodeSets:
  - name: master-nodes
    count: 1
    config:
      node.master: true
      node.data: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: rook-ceph-block
  - name: data-nodes
    count: 1
    config:
      node.master: false
      node.data: true
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: rook-ceph-block

# kubectl apply -f es.yaml

After a while, check the status of the elasticsearch cluster:

# kubectl get pods
quickstart-es-data-nodes-0     1/1   Running   0   54s
quickstart-es-master-nodes-0   1/1   Running   0   54s
# kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.7.1     Ready   73s

Looking at the status of the PVs, we can see that the requested PVs have been created and successfully bound:

# kubectl get pv
pvc-512cc739-3654-41f4-8339-49a44a093ecf   10Gi   RWO   Retain   Bound   default/elasticsearch-data-quickstart-es-data-nodes-0     rook-ceph-block   9m5s
pvc-eff8e0fd-f669-448a-8b9f-05b2d7e06220   5Gi    RWO   Retain   Bound   default/elasticsearch-data-quickstart-es-master-nodes-0   rook-ceph-block   9m5s
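The claim side can be checked as well (an optional extra step); the PVC names follow the <volumeClaimTemplate>-<pod> pattern visible in the claim column above:

# kubectl get pvc
# kubectl describe pvc elasticsearch-data-quickstart-es-data-nodes-0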

By default the cluster has basic authentication turned on; the user name is elastic, and the password can be obtained from a secret. The cluster also enables HTTPS with a self-signed certificate by default. We can access elasticsearch through its service resources:

# kubectl get services
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
quickstart-es-data-nodes     ClusterIP   None             <none>        <none>     4m10s
quickstart-es-http           ClusterIP   10.107.201.126   <none>        9200/TCP   4m11s
quickstart-es-master-nodes   ClusterIP   None             <none>        <none>     4m10s
quickstart-es-transport      ClusterIP   None             <none>        9300/TCP   4m11s
# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
# curl https://10.107.201.126:9200 -u 'elastic:J1fO9bu88j8pYK8rIu91a73o' -k
{
  "name" : "quickstart-es-data-nodes-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "AQxFX8NiTNa40mOPapzNXQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
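Beyond the root endpoint, the _cluster/health API gives a quick view of cluster status. A minimal sketch, assuming the ClusterIP above is reachable from where you run curl; the password is read from the same secret instead of being pasted inline:

# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
# curl -u "elastic:${PASSWORD}" -k "https://10.107.201.126:9200/_cluster/health?pretty"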

To expand by one data node without stopping the cluster, change the value of count under data-nodes in es.yaml to 2 (see the fragment below), then apply es.yaml again.
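For reference, the only change in es.yaml is the count field of the data-nodes nodeSet; the rest of the manifest stays exactly as shown earlier:

  - name: data-nodes
    count: 2        # was 1; ECK adds quickstart-es-data-nodes-1 automatically
    config:
      node.master: false
      node.data: true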

# kubectl apply -f es.yaml
# kubectl get pods
quickstart-es-data-nodes-0     1/1   Running   0   24m
quickstart-es-data-nodes-1     1/1   Running   0   8m22s
quickstart-es-master-nodes-0   1/1   Running   0   24m
# kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    3       7.7.1     Ready   25m

To shrink by one data node without stopping the cluster (data is migrated automatically), change the value of count under data-nodes in es.yaml back to 1, then apply es.yaml again.

Connecting kibana

By default kibana also enables HTTPS with a self-signed certificate, which we can choose to disable. Deploy kibana using ECK, with TLS turned off:

# vim kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.7.1
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    tls:
      selfSignedCertificate:
        disabled: true

# kubectl apply -f kibana.yaml
# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-es-data-nodes-0       1/1     Running   0          31m
quickstart-es-data-nodes-1       1/1     Running   1          15m
quickstart-es-master-nodes-0     1/1     Running   0          31m
quickstart-kb-6558457759-2rd7l   1/1     Running   1          4m3s
# kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.7.1     4m27s
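Before wiring up ingress, you can sanity-check kibana from your workstation with a port-forward (an optional step, not in the original text). quickstart-kb-http is the HTTP service ECK creates for this Kibana resource, and since TLS was disabled above it answers over plain HTTP:

# kubectl port-forward service/quickstart-kb-http 5601:5601
# curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601    # run from another terminal; any HTTP status code means kibana is answering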

Add a layer-4 (TCP) proxy in ingress for kibana to expose it externally:

# vim tsp-kibana.yaml
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: kibana-tcp
    port: 5601
    protocol: TCP
---
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: kibana-tcp
spec:
  listener:
    name: kibana-tcp
    protocol: TCP
  upstreams:
  - name: kibana-app
    service: quickstart-kb-http
    port: 5601
  action:
    pass: kibana-app

# kubectl apply -f tsp-kibana.yaml
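To confirm that the ingress controller picked up the TCP listener, the two resources created above can be inspected like any other custom resource (a hedged check; it assumes the k8s.nginx.org CRDs are installed, which the manifest above already requires):

# kubectl get globalconfiguration nginx-configuration -n nginx-ingress
# kubectl describe transportserver kibana-tcp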

The default kibana user name for accessing elasticsearch is elastic, and its password is obtained as follows:

# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

Then access kibana through a browser.

Delete ECK-related resources

Delete elasticsearch, kibana, and ECK:

# kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
  | xargs -n1 kubectl delete elastic --all -n
# kubectl delete -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
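Before removing the operator with the second command above, you can optionally verify that all Elastic custom resources were cleaned up (not part of the original post):

# kubectl get elasticsearch,kibana --all-namespaces
# kubectl get pods -n elastic-system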
Connecting cerebro

First install Helm, the package management tool for Kubernetes applications. Helm packages native Kubernetes applications as charts (sets of YAML files) and lets you customize some of the application's metadata at deployment time; it relies on charts to distribute applications on k8s. Helm and charts provide the following main functions:

  • Application encapsulation
  • Version management
  • Dependency checking
  • Application distribution
Download and install helm, then add the stable chart repository:

# wget https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
# tar -zxvf helm-v3.2.3-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm
# helm repo add stable https://kubernetes-charts.storage.googleapis.com
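A quick way to confirm the installation worked (optional; it just assumes /usr/local/bin is on your PATH):

# helm version --short
# helm repo list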

Install cerebro through helm:

# helm install stable/cerebro --version 1.1.4 --generate-name

View the status of cerebro:

# kubectl get pods | grep cerebro
cerebro-1591777586-7fd87f7d48-hmlp7   1/1   Running   0   11m

Since the elasticsearch cluster deployed by ECK enables HTTPS with a self-signed certificate by default, you can either skip HTTPS certificate verification in the cerebro configuration or add the CA certificate of the self-signed certificate to cerebro so it can verify the cluster, and then restart cerebro:
1. Export cerebro's configmap:

# kubectl get configmap cerebro-1591777586 -o yaml > cerebro.yaml

2. Replace the hosts block for cerebro in the configmap with the following configuration (where quickstart-es-http is the service resource name of elasticsearch):

play.ws.ssl.loose.acceptAnyCertificate = true
hosts = [
  {
    host = "https://quickstart-es-http.default.svc:9200"
    name = "k8s elasticsearch"
  }
]

3. Apply cerebro's configmap and restart the cerebro pod:

# kubectl apply -f cerebro.yaml
# kubectl get pods | grep cerebro
cerebro-1591777586-7fd87f7d48-hmlp7   1/1   Running   0   11m
# kubectl get pod cerebro-1591777586-7fd87f7d48-hmlp7 -o yaml | kubectl replace --force -f -
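If cerebro still cannot reach the cluster after the restart, its logs usually show the TLS or authentication error. A hedged way to pull them without looking up the new pod name, assuming the chart created a Deployment called cerebro-1591777586 (as the pod name above suggests):

# kubectl logs deployment/cerebro-1591777586 --tail=50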

First confirm cerebro's service resource, then configure ingress to add a layer-7 (HTTP) proxy for cerebro:

# kubectl get services
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
cerebro-1591777586   ClusterIP   10.111.107.171   <none>        80/TCP    19m
# vim cerebro-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cerebro-ingress
spec:
  rules:
  - host: cerebro.myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cerebro-1591777586
          servicePort: 80

# kubectl apply -f cerebro-ingress.yaml

Add the host binding "172.18.2.175 cerebro.myk8s.com" to the /etc/hosts file of the local PC, then access cerebro through the browser.
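If you prefer the command line to the browser, a curl with an explicit Host header works as well; this assumes 172.18.2.175 is the address the ingress controller listens on, as implied by the hosts entry above:

# curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: cerebro.myk8s.com' http://172.18.2.175/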

Delete cerebro

# helm list
NAME                 NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
cerebro-1591777586   default     1          2020-06-10 16:26:30.419723417 +0800 CST   deployed   cerebro-1.1.4   0.8.4
# helm delete cerebro-1591777586
Reference resources

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-http-configuration.html
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
https://hub.helm.sh/charts/stable/cerebro
https://www.elastic.co/cn/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond
https://helm.sh/docs/intro/install/

