How to quickly build a simple ELK log analysis system

1, ELK introduction

  ELK is an excellent open source stack for building a real-time log analysis platform. ELK is an acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Together, these components form a unified log management system that collects the logs scattered across the machines of a distributed deployment for later analysis.
  Among them, Elasticsearch is a real-time distributed search and analytics engine built on Apache Lucene, a full-text search library, and written in Java. It provides distribution, high availability, easy scaling, automatic sharding and replication of indices, a REST-style API over HTTP with JSON as the data interchange format, and real-time analysis and storage of multiple data sources.
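  As a quick illustration of that REST/JSON interface, the sketch below indexes one document and reads it back; the host, the index name demo, and the document body are placeholders for illustration only:

#Index a document (Elasticsearch 6.x requires the Content-Type header)
curl -H "Content-Type: application/json" -XPUT 'http://localhost:9200/demo/doc/1' -d '{"message": "hello elk"}'
#Retrieve it as formatted JSON
curl 'http://localhost:9200/demo/doc/1?pretty'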
  Logstash is mainly used to collect and filter logs, format the data, and forward the collected logs to other systems for storage. Logstash is developed in Ruby and consists of three stages: input, filter, and output. The input stage collects data from data sources; the filter stage is the processing layer, covering data formatting, type conversion, and filtering, with support for regular expressions; the output stage sends the data collected by Logstash, after filter processing, on to other systems such as Kafka, HDFS, or Elasticsearch.
  Kibana is an open source analysis and visualization platform for Elasticsearch, developed with Node.js, which can be used to search and display the data stored in Elasticsearch. It also provides rich chart templates, so advanced data analysis and a variety of charts require only simple configuration.
  In practice, a real ELK system also needs a log shipper such as Filebeat. It is installed on every server whose logs need collecting and sends them to Logstash for processing, so Beats acts as a "porter" that moves your logs to the log collection server. This post does not cover such components for the time being.

2, Required environment

1. Close the firewall (in a production environment, open only the required ports instead) — see the sketch after this list.
2. For the JDK environment, refer to the blog post CentOS7 environment installation jdk, tomcat and their configuration environment variables.
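A minimal sketch, assuming CentOS 7 with firewalld and a JDK already installed:

#Stop and disable the firewall (in production, open ports 9200 and 5601 instead)
systemctl stop firewalld
systemctl disable firewalld
#Confirm the JDK is on the PATH (Elasticsearch 6.x requires Java 8 or later)
java -version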

3, Elasticsearch installation

1. Download Elasticsearch. Version 6.3.1 is used here. The command is as follows:

#Current directory: /usr/local/soft/ELKB/
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.1.tar.gz
 

2. Decompress

#Current directory: /usr/local/soft/ELKB/
tar -zxvf elasticsearch-6.3.1.tar.gz
 

3. Modify the configuration file
    For this introductory tutorial, sticking to the default configuration as far as possible, edit /usr/local/soft/ELKB/elasticsearch-6.3.1/config/elasticsearch.yml and add the following settings:

network.host: 192.168.1.8
http.port: 9200
 

Here, /usr/local/soft/ELKB/elasticsearch-6.3.1/ is the root directory that elasticsearch-6.3.1 was extracted to; adjust it to your actual layout.
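Note: once network.host is set to a non-loopback address, Elasticsearch 6.x enforces its bootstrap checks, and two OS limits usually need raising first; a sketch, run as root, using the documented minimum values:

#Raise the mmap count limit (add the setting to /etc/sysctl.conf to persist it)
sysctl -w vm.max_map_count=262144
#Raise the open-file limit for the elkb user created in the next step
echo "elkb soft nofile 65536" >> /etc/security/limits.conf
echo "elkb hard nofile 65536" >> /etc/security/limits.conf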
4. Create a user to run Elasticsearch (Elasticsearch cannot be started as root)

adduser elkb
 # Change the owner and group of the Elasticsearch directory to elkb
chown -R elkb:elkb /usr/local/soft/ELKB/elasticsearch-6.3.1/
 

5. Switch to the elkb user and start Elasticsearch

#Switch to the elkb user
su elkb
#Start Elasticsearch. The current directory is /usr/local/soft/ELKB/elasticsearch-6.3.1/
bin/elasticsearch
 

Running bin/elasticsearch -d starts Elasticsearch in the background, so closing the terminal does not shut down ES; the logs are then written to files rather than the console. This mode is recommended for production environments.
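A minimal sketch of daemonizing with a PID file so the process can be stopped cleanly later (the pid file name is arbitrary):

#Start in the background and record the process id
bin/elasticsearch -d -p elasticsearch.pid
#Stop the daemon later
kill `cat elasticsearch.pid`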

6. Verify
Visit http://192.168.1.8:9200/. If a JSON response with the node and version information appears, the startup succeeded.
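You can also check from the command line; the response below is abbreviated, and the name, UUIDs, and other values will differ on your machine:

curl http://192.168.1.8:9200/
{
  "name" : "...",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.3.1",
    ...
  },
  "tagline" : "You Know, for Search"
}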

4, Kibana installation

1. Download Kibana. Version 6.3.1 is used here to match Elasticsearch. The command is as follows:

#Current directory: /usr/local/soft/ELKB/
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.1-linux-x86_64.tar.gz
 

2. Decompress

#Current directory: /usr/local/soft/ELKB/
tar -zxvf kibana-6.3.1-linux-x86_64.tar.gz
 

3. Modify the configuration file
Edit /usr/local/soft/ELKB/kibana-6.3.1-linux-x86_64/config/kibana.yml and add the following settings:

# Kibana port
server.port: 5601
# Kibana ip
server.host: "192.168.1.8"
# elasticsearch address
elasticsearch.url: "http://192.168.1.8:9200"
 

4. Start

#Start Kibana. The current directory is /usr/local/soft/ELKB/kibana-6.3.1-linux-x86_64
bin/kibana
 

5. Verify
Visit http://192.168.1.8:5601. If the Kibana web interface loads, the startup succeeded.
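Like Elasticsearch, Kibana stops when the terminal session ends; a minimal sketch of keeping it running in the background with nohup (the log file path is arbitrary):

#Current directory: /usr/local/soft/ELKB/kibana-6.3.1-linux-x86_64
nohup bin/kibana > kibana.log 2>&1 &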

5, Logstash installation

1. Download Logstash. Version 6.3.1 is used here to match Elasticsearch. The command is as follows:

#Current directory: /usr/local/soft/ELKB/
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.tar.gz
 

2. Decompress

#Current directory: /usr/local/soft/ELKB/
tar -zxvf logstash-6.3.1.tar.gz
 

3. Verify
In the /usr/local/soft/ELKB/logstash-6.3.1 directory, execute bin/logstash -e 'input { stdin { } } output { stdout {} }' and then type hello. Output like the following shows that Logstash works normally:

#Current directory: /usr/local/soft/ELKB/logstash-6.3.1
[root@node08 logstash-6.3.1]# bin/logstash -e 'input { stdin { } } output { stdout {} }'
hello
Sending Logstash's logs to /usr/local/soft/ELKB/logstash-6.3.1/logs which is now configured via log4j2.properties
[2020-05-14T13:23:50,509][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/soft/ELKB/logstash-6.3.1/data/queue"}
[2020-05-14T13:23:50,534][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/local/soft/ELKB/logstash-6.3.1/data/dead_letter_queue"}
[2020-05-14T13:23:51,495][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-14T13:23:51,699][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"95edb338-9c94-47d4-b357-4406ffa6ba33", :path=>"/usr/local/soft/ELKB/logstash-6.3.1/data/uuid"}
[2020-05-14T13:23:52,846][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.1"}
[2020-05-14T13:23:55,728][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-05-14T13:23:56,398][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x55098452 run>"}
The stdin plugin is now waiting for input:
[2020-05-14T13:23:56,532][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => 2020-05-14T05:23:56.445Z,
          "host" => "node08"
}
[2020-05-14T13:23:57,375][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
 

6, Example

   Through the steps above, the installation of all three components is complete. The example below reads the nginx access log and writes it to ES. Make sure an access log exists at /usr/local/nginx/logs/access.log (the directory can be anything) to act as the source that Logstash collects from; a sketch for generating a test line follows.
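If nginx is not actually running on this machine, you can append a hand-written line in combined log format, plus the quoted gzip ratio field that the grok pattern below expects; every value in this line is made up for illustration:

echo '192.168.1.100 - - [14/May/2020:13:23:56 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" "2.75"' >> /usr/local/nginx/logs/access.log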

1. Configure logstash collection and processing logs

In the /usr/local/soft/ELKB/logstash-6.3.1 directory, create a job directory (the name can be anything you like), then enter it and create an nginx.conf file with the following contents:

input { #Configure the log source; a fixed file is used here
  file{
        path => "/usr/local/nginx/logs/access.log"
        start_position => "beginning"
        type => "nginx_access_log"
  }
}
 
filter {
  if [type] == "nginx_access_log" { # Must match the type set in the input above
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG} %{QS:gzip_ratio}" } # Use the built-in pattern, and pay attention to the spaces
    }
    # Change the data type of the matched field
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    # Specify the timestamp field and the specific format
    date {
      match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
  }
} 
output { #Configure Elasticsearch as the output destination
  elasticsearch {
    hosts => [ "192.168.1.8:9200" ]
    index => "%{type}-%{+YYYY.MM.dd}" # The index contains a timestamp
  }
}
 

2. Restart Logstash with the nginx.conf file above. The command is as follows:

#In the current directory /usr/local/soft/ELKB/logstash-6.3.1, execute the following command
bin/logstash -f job/nginx.conf
 

Note: startup takes a while; be patient.
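Tip: the configuration file can be syntax-checked without starting the pipeline; --config.test_and_exit is a standard Logstash flag:

#Current directory: /usr/local/soft/ELKB/logstash-6.3.1
bin/logstash -f job/nginx.conf --config.test_and_exit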

3. Configure the mapping of the ES index by executing the following command:

curl -H "Content-Type: application/json" -XPUT 192.168.1.8:9200/_template/nginx -d '
{
  "template": "nginx*",
  "mappings": {
    "_default_": {
      "properties": {
        "clientip": {
          "type": "keyword"
        },
        "ident": {
          "type": "keyword"
        },
        "auth": {
          "type": "keyword"
        },
        "verb": {
          "type": "keyword"
        },
        "request": {
          "type": "keyword"
        },
        "httpversion": {
          "type": "keyword"
        },
        "rawrequest": {
          "type": "keyword" 
        },
        "response": {
          "type": "keyword"
        },
        "bytes": {
          "type": "integer"
        },
        "referrer": {
          "type": "keyword"
        },
        "agent": {
          "type": "keyword"
        },
        "gzip_ratio": {
          "type": "keyword"
        }
      }
    }
  }
}'
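To confirm the template was registered, query it back (the ?pretty parameter just formats the JSON response):

curl '192.168.1.8:9200/_template/nginx?pretty'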
 

4. Viewing Elasticsearch index data using Kibana

  1. Visit http://192.168.1.8:5601 to open the Kibana interface and find the Management menu.
  2. Create an index pattern (a matching rule for the index, e.g. nginx*) and click Next.
  3. When creating the index pattern, select "I don't want to use the Time Filter", then click the Create button.
  4. After clicking Create, the index pattern page opens, indicating that the creation succeeded.
  5. Click the Discover menu; the data from the nginx access log is now visible (a shell-level check follows this list).
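The indexed data can also be confirmed straight from the shell; the index name follows the %{type}-date pattern from the output section, so a wildcard covers any date:

curl '192.168.1.8:9200/nginx_access_log-*/_search?pretty&size=1'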

7, Closing remarks

    In this post, we demonstrated how to deploy the three ELK components and configured a simple example that reads nginx access logs. In the next article, we will introduce the Filebeat component to truly read log files dynamically from multiple servers.
  In another post, I recorded the problems encountered while installing and deploying ELK, along with their solutions; see Common problems and solutions in building ELK log analysis system.

 

