1, ELK overview
- Elasticsearch: responsible for log retrieval and storage
- Logstash: responsible for the collection, analysis, and processing of logs
- Kibana: responsible for the visualization of logs
- ELK is a complete solution; the name is an acronym of the three products, and it is used by many companies, such as Sina, Ctrip, Huawei, and Meituan
- All three are open-source products from the company Elastic.co and are usually used together, hence the abbreviation ELK
The ELK components can be used to solve common problems in operating and maintaining massive log systems:
- Centralized query and management of distributed log data
- System monitoring, including monitoring of system hardware and application components
- Troubleshooting
- Security information and event management
- Reporting
2, Elasticsearch
1. Overview of Elasticsearch
Elasticsearch is a Lucene-based search server. It provides a distributed, multi-user full-text search engine with a RESTful web interface.
It is open-source software developed in Java and released under the Apache License, and it is a popular enterprise search engine. Designed for the cloud, it offers real-time search and is stable, reliable, fast, and easy to install and use.
Main features
- Real-time analysis; document-oriented, distributed real-time file storage
- High availability and easy scaling, with support for clusters, shards, and replication
- A friendly interface with JSON support
- No typical transaction support
- A document-oriented database
Terminology
Node: a server with ES installed
Cluster: a group of nodes
Document: the basic unit of information that can be searched
Index: a collection of documents with similar characteristics
Type: one or more types can be defined within an index
Field: the smallest unit in ES, equivalent to one column of data
Shards: index shards; an index can be split into multiple shards
Replicas: copies of the index shards
Comparison with relational database
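The terms above map onto familiar relational concepts. The analogy commonly drawn for ES 6.x and earlier:

Elasticsearch | Relational database
---|---
Index | Database
Type | Table
Document | Row
Field | Column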
2. Preparation for deploying Elasticsearch
1) Configure a yum repository on the jump host
Copy the elk/ directory of RPM packages from the cloud disk to the jump host
    [root@ecs-proxy public]# ls elk/
    elasticsearch-6.8.8.rpm    filebeat-6.8.8-x86_64.rpm    head.tar.gz
    kibana-6.8.8-x86_64.rpm    logs.jsonl.gz                logstash-6.8.8.rpm
    metricbeat-6.8.8-x86_64.rpm
    [root@ecs-proxy ~]# cp -a elk /var/ftp/localrepo/elk
    [root@ecs-proxy ~]# cd /var/ftp/localrepo/
    [root@ecs-proxy localrepo]# createrepo --update .
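Each newly purchased virtual machine then needs a repo file pointing at this repository. A minimal sketch, assuming the jump host serves /var/ftp/localrepo over FTP (substitute the jump host's real address for <jump-host-ip>):

    [root@es-0001 ~]# vim /etc/yum.repos.d/local.repo
    [localrepo]
    # local repository served from the jump host
    name=local repository
    baseurl=ftp://<jump-host-ip>/localrepo
    enabled=1
    gpgcheck=0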
2) Purchase virtual machines

Host | IP address | Configuration
---|---|---
es-0001 | 192.168.1.41 | Minimum: 2 cores, 2 GB
es-0002 | 192.168.1.42 | Minimum: 2 cores, 2 GB
es-0003 | 192.168.1.43 | Minimum: 2 cores, 2 GB
es-0004 | 192.168.1.44 | Minimum: 2 cores, 2 GB
es-0005 | 192.168.1.45 | Minimum: 2 cores, 2 GB
3. Install single-node Elasticsearch
    [root@es-0001 ~]# vim /etc/hosts
    192.168.1.41 es-0001
    [root@es-0001 ~]# yum install -y java-1.8.0-openjdk elasticsearch
    [root@es-0001 ~]# vim /etc/elasticsearch/elasticsearch.yml
    55:  network.host: 0.0.0.0
    [root@es-0001 ~]# systemctl enable --now elasticsearch
Test

    [root@es-0001 ~]# curl http://192.168.1.41:9200/
    {
      "name" : "War Eagle",
      "cluster_name" : "elasticsearch",
      "version" : {
        "number" : "2.3.4",
        "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
        "build_timestamp" : "2016-06-30T11:24:31Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
      },
      "tagline" : "You Know, for Search"
    }

(Sample output; with the 6.8.8 package installed, the version fields will show 6.8.8 and its corresponding Lucene version.)
4. Install an Elasticsearch cluster
status: green
- Cluster status; green means everything is normal
- Yellow indicates a problem that is not yet serious; red indicates a serious fault
number_of_nodes: 5
- The number of nodes in the cluster
number_of_data_nodes: 5
- The number of nodes used to store data
All hosts, es-0001 through es-0005, must perform the following operations
    [root@es-0001 ~]# vim /etc/hosts
    192.168.1.41 es-0001
    192.168.1.42 es-0002
    192.168.1.43 es-0003
    192.168.1.44 es-0004
    192.168.1.45 es-0005
    [root@es-0001 ~]# yum install -y java-1.8.0-openjdk elasticsearch
    [root@es-0001 ~]# vim /etc/elasticsearch/elasticsearch.yml
    17:  cluster.name: my-es        # cluster name, identical on all nodes
    23:  node.name: es-0001         # this node's own hostname
    55:  network.host: 0.0.0.0      # listen address, the same on all nodes
    68:  discovery.zen.ping.unicast.hosts: ["es-0001", "es-0002"]  # discovery seed nodes
    [root@es-0001 ~]# systemctl enable --now elasticsearch
Test

    [root@es-0001 ~]# curl http://192.168.1.41:9200/_cluster/health?pretty
    {
      "cluster_name" : "my-es",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 5,
      "number_of_data_nodes" : 5,
      ... ...
    }
Note: the cluster presents the same view to every user, so requests can be sent to the IP of any node (see the check below).
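To confirm this, query a different node and compare; a minimal sketch using the cluster above:

    # es-0003 returns the same cluster health report as es-0001
    [root@es-0001 ~]# curl http://192.168.1.43:9200/_cluster/health?pretty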
3, Cluster plugin (head)
1. Overview of the head plugin
- It shows the topology of the ES cluster and can be used for index-level and node-level operations
- It provides a set of query APIs for the cluster and returns results in JSON and tabular form
- It provides shortcut menus that show various states of the cluster
2. Plugin deployment diagram
Purchase a virtual machine
Host | IP address | Configuration
---|---|---
web | 192.168.1.48 | Minimum: 1 core, 1 GB
3. Install the plugin
Install Apache and deploy the head plugin
    [root@web ~]# yum install -y httpd
    [root@web ~]# tar zxf head.tar.gz
    [root@web ~]# mv elasticsearch-head /var/www/html/head
    [root@web ~]# systemctl enable --now httpd
    Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
4. Access authorization on es-0001
    [root@es-0001 ~]# vim /etc/elasticsearch/elasticsearch.yml
    # append at the end of the configuration file
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
    http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length
    [root@es-0001 ~]# systemctl restart elasticsearch.service
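To verify that the CORS settings took effect, one option (a sketch; run it from any host that can reach es-0001) is to send a request with an Origin header and check for the Access-Control-Allow-Origin response header:

    # The response headers should include access-control-allow-origin
    [root@web ~]# curl -si -H "Origin: http://192.168.1.48" http://192.168.1.41:9200/ | grep -i access-control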
5. Test
Use Huawei Cloud ELB to publish port 80 of the web service and port 9200 of es-0001 to the Internet, then access them through a browser
1) Purchase an ELB, purchase a public IP, and bind them
2) Add two listeners (ports 80 and 9200)
3) Point them at the web service and the head plugin respectively
4) Access the head plugin in a browser and connect to the cluster
Note: * marks the master node, which is elected automatically
4, Sending API requests to Elasticsearch (curl)
A brief overview of the simple cluster management APIs
1. Browser access
Elasticsearch is accessed over the HTTP protocol
An HTTP request consists of three parts
- Request line, message headers, and request body
- Request line format: Method Request-URI HTTP-Version CRLF
HTTP request methods (an example follows this list)
- Common methods: GET, POST, HEAD
- Other methods: OPTIONS, PUT, DELETE, TRACE, and CONNECT
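For instance, a sketch against the node installed above: curl -I sends a HEAD request, so only the response headers come back.

    # HEAD request: the request line is "HEAD / HTTP/1.1"; the response has headers but no body
    [root@es-0001 ~]# curl -I http://es-0001:9200/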
2. Command line access
Request methods used by Elasticsearch
- Create: PUT
- Delete: DELETE
- Update: POST
- Query: GET
Data exchanged with Elasticsearch must be in JSON format
3. curl command access
On Linux, curl is a command-line tool that transfers data using URL syntax. It is a very powerful HTTP tool: it supports many request methods, custom request headers, and other advanced features.
Usage format:

    curl -X <request method> http://<request address>
    curl -H <custom request header> http://<request address>
4. curl usage
The _cat keyword queries cluster status, node information, and so on. Append ?v for detailed output and ?help for help information.
    # Query the supported keywords
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/_cat/
    # Check one specific item
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/_cat/master
    # Show details (?v)
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/_cat/master?v
    # Show help (?help)
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/_cat/master?help
5. Create an index with the PUT method
Specify the index name, the number of shards, and the number of replicas.
Use the PUT method to create the index, then verify it with the head plugin.
    # number_of_shards: how many shards the data is split into
    # number_of_replicas: how many copies of each shard
    [root@es-0001 ~]# curl -XPUT -H "Content-Type: application/json" http://es-0001:9200/tedu -d \
    '{
      "settings": {
        "index": {
          "number_of_shards": 5,
          "number_of_replicas": 1
        }
      }
    }'
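Besides the head plugin, the new index can also be verified from the command line; a minimal sketch:

    # The tedu index should list 5 primary shards (pri) and 1 replica (rep)
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/_cat/indices?v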
6. Add data
    [root@es-0001 ~]# curl -XPUT -H "Content-Type: application/json" \
    http://es-0001:9200/tedu/teacher/1 -d \
    '{
      "occupation": "poet",
      "name": "Li Bai",
      "title": "Shixian",
      "years": "Tang"
    }'
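PUT with an explicit id overwrites the document at that id on repeat writes. As a side note, a sketch with hypothetical example data: POSTing to the type without an id makes Elasticsearch generate one.

    # POST without an id: ES assigns an auto-generated _id to the new document
    [root@es-0001 ~]# curl -XPOST -H "Content-Type: application/json" \
    http://es-0001:9200/tedu/teacher/ -d '{"occupation": "poet", "name": "Du Fu"}'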
7. Query data
    [root@es-0001 ~]# curl -XGET http://es-0001:9200/tedu/teacher/1?pretty
8. Modify data
    [root@es-0001 ~]# curl -XPOST -H "Content-Type: application/json" \
    http://es-0001:9200/tedu/teacher/1/_update -d \
    '{"doc": {"years": "701"}}'
9. Delete data
    # Delete one document
    [root@es-0001 ~]# curl -XDELETE http://es-0001:9200/tedu/teacher/1
    # Delete the whole index
    [root@es-0001 ~]# curl -XDELETE http://es-0001:9200/tedu
5, Kibana
1. Introduction to Kibana
What is Kibana
- A data visualization platform tool
Features:
- A flexible analysis and visualization platform
- Real-time charts summarizing traffic and data
- An intuitive interface for different users
- Instant sharing and embeddable dashboards
2. Kibana installation and configuration
1) Purchase a virtual machine

Host | IP address | Configuration
---|---|---
kibana | 192.168.1.46 | Minimum: 1 core, 1 GB
2) Install Kibana
    [root@kibana ~]# vim /etc/hosts
    192.168.1.41 es-0001
    192.168.1.42 es-0002
    192.168.1.43 es-0003
    192.168.1.44 es-0004
    192.168.1.45 es-0005
    192.168.1.46 kibana
    [root@kibana ~]# yum install -y kibana
    [root@kibana ~]# vim /etc/kibana/kibana.yml
    02:  server.port: 5601              # port
    07:  server.host: "0.0.0.0"         # listen address
    28:  elasticsearch.hosts: ["http://es-0002:9200", "http://es-0003:9200"]  # Elasticsearch addresses
    37:  kibana.index: ".kibana"        # index used by Kibana
    40:  kibana.defaultAppId: "home"    # default start page
    113: i18n.locale: "zh-CN"           # language
    [root@kibana ~]# systemctl enable --now kibana
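Once the service has started (it can take a minute to come up), a quick local check is possible; a sketch assuming Kibana 6.x's status endpoint:

    # /api/status reports the state of the Kibana service
    [root@kibana ~]# curl -s http://192.168.1.46:5601/api/status | head -c 300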
3) Bind an elastic public IP and verify through a web browser
The IP was already bound in an earlier step
6, Kibana charting
1. Overview of importing data
Conditions for importing data (a format sketch follows this list):
- JSON format: the header Content-Type: application/json must be specified
- Import keyword: _bulk
- HTTP method: POST
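The _bulk body is newline-delimited JSON: each document takes an action line followed by a source line, and the body must end with a newline. A minimal sketch with hypothetical index and type names:

    { "index": { "_index": "test", "_type": "doc", "_id": "1" } }
    { "field1": "value1" }
    { "index": { "_index": "test", "_type": "doc", "_id": "2" } }
    { "field1": "value2" }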
2. Import log data test
On the jump host, copy /var/ftp/localrepo/elk/logs.jsonl.gz to a working directory, unpack it, and import it
    [root@ecs-proxy ~]# cd /var/ftp/localrepo/elk/
    [root@ecs-proxy elk]# cp logs.jsonl.gz /root/
    [root@ecs-proxy elk]# cd /root/
    [root@ecs-proxy ~]# gunzip logs.jsonl.gz
    [root@ecs-proxy ~]# curl -XPOST -H "Content-Type: application/json" \
    http://192.168.1.41:9200/_bulk --data-binary @logs.jsonl
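Before opening the head plugin, the import can be confirmed from the jump host; a sketch:

    # The imported indices should now appear with a non-zero docs.count
    [root@ecs-proxy ~]# curl -XGET http://192.168.1.41:9200/_cat/indices?v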
3. View the imported data
4. Draw a traffic chart
1) Choose to use Elasticsearch data (create an index pattern)
These steps are not required if real-time logs are already flowing in
2) Other analysis charts
Pie chart
Sub-charts can also be selected