Deploying an ELK Log Analysis Platform on CentOS 8

Requirements

1. Developers cannot log on to production servers to view logs
2. Logs are scattered across many systems, making them hard to locate
3. Log volume is large, searches are slow, and data is not available in real time

Solution: deploy an ELK platform

Introduction to ELK

ELK is an abbreviation for three open source projects: Elasticsearch, Logstash, and Kibana. A newer addition, Filebeat, is a lightweight log collection agent that uses few resources and is well suited to gathering logs on individual servers and forwarding them to Logstash.
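As a sketch of how Filebeat typically hands logs to Logstash, a minimal filebeat.yml might look like the following. The paths, host, and port are illustrative, and these keys apply to recent Filebeat versions (older 1.x releases used a different schema):

```yaml
# Illustrative Filebeat configuration (not part of this tutorial's setup):
# read the system log and ship events to a Logstash beats input.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
output.logstash:
  hosts: ["192.168.0.102:5044"]   # assumed Logstash beats port
```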

ELK Architecture Diagram

Introduction to Elasticsearch:

Elasticsearch is an open source distributed search engine whose main functions are collecting, analyzing, and storing data.
Features: distributed, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, automatic search load balancing, and more.

Deploy Elasticsearch

1. Configure yum source

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch   # import the GPG key
vim /etc/yum.repos.d/elasticsearch.repo   # configure the yum repository

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

2. Install elasticsearch

yum install elasticsearch -y #Install elasticsearch

3. Configure Elasticsearch

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: yltx    # line 17: cluster name
node.name: node1   # line 23: node name
path.data: /data/es-data   # line 33: data directory
path.logs: /var/log/elasticsearch  # line 37: log directory
bootstrap.memory_lock: true    # line 43: lock memory, do not swap
network.host: 0.0.0.0    # line 54: listen address
http.port: 9200   # line 58: HTTP port

mkdir -p /data/es-data
chown -R elasticsearch:elasticsearch /data/es-data/

4. Memory locking and file descriptor limits

These changes must be made for production environments (note)

vim /etc/security/limits.conf

Append to the end of the file:
elasticsearch soft memlock unlimited   
elasticsearch hard memlock unlimited   
* soft nofile 65535        
* hard nofile 65535

systemctl start elasticsearch.service #Start Services
netstat -ntap | grep 9200
ps -ef |grep elasticsearch
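After logging in again (so the new limits.conf settings apply), the effective limits for the current shell can be checked; a quick sketch:

```shell
# Check the limits the current shell actually got; nofile should report
# 65535 and memlock "unlimited" once limits.conf is in effect.
ulimit -n    # max open file descriptors
ulimit -l    # max locked-in-memory size (KB, or "unlimited")
```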

Web test: http://192.168.0.102:9200/
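The same check can be scripted. Below is a small helper (hypothetical, not part of the tutorial) that queries the cluster-health REST endpoint, assuming curl is installed:

```shell
# Hypothetical helper: fetch cluster health from an ES node's REST API.
# Defaults to localhost; pass the node address as the first argument.
es_health() {
  curl -s "http://${1:-localhost}:9200/_cluster/health?pretty"
}

# Usage against the node above (requires a running cluster):
# es_health 192.168.0.102
```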

Install the elasticsearch-head plugin

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

Web access:

http://192.168.0.102:9200/_plugin/head/

Introduction to Logstash:

Logstash is a tool for collecting, analyzing, and filtering logs, with support for many data acquisition methods. The client side is installed on hosts whose logs need to be collected; the server side filters and transforms the logs received from each node and forwards them to Elasticsearch.
Basic Logstash collection flow: input --> codec --> filter --> codec --> output
1. input: where to collect logs from
2. filter: filter events before they are sent
3. output: write to Elasticsearch or a Redis message queue
4. codec: encode/decode events; printing to the console is convenient for testing
5. Logs with small data volumes can be collected on a monthly schedule
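The filter stage is not exercised in the tests below; as an illustrative sketch (not from the original tutorial), a grok filter could parse Apache access logs before they are indexed. COMBINEDAPACHELOG is a stock grok pattern:

```
filter {
  grok {
    # Parse the raw "message" field into structured fields
    # (clientip, verb, request, response, and so on).
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```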

Deploy Logstash

1. Configure yum source

vim /etc/yum.repos.d/logstash.repo

[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

2. Download and install logstash

yum install logstash -y

Test logstash

Basic syntax for logstash

input {
    # specify input plugins here
}

output {
    # specify output plugins here
}

1. Test Standard Input and Output

Use the rubydebug codec to print events to the console for demonstration and testing:
/opt/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug} }'
hello #Input hello test

2. Test output to file

/opt/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log"} }'
cat /tmp/test-2020.02.17.log

3. Turn on log compression

/opt/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log.gz" gzip => true } }'
ll /tmp/

4. Test output to elasticsearch

/opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.0.102:9200"] index => "logstash-test-%{+YYYY.MM.dd}" } }'
ll /data/es-data/yltx/nodes/0/indices
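The %{+YYYY.MM.dd} suffix in the index name is a date pattern expanded per event, so a new index is created each day; today's index name can be reproduced with date(1):

```shell
# Reproduce the daily index name logstash would write today.
index="logstash-test-$(date +%Y.%m.%d)"
echo "$index"    # e.g. logstash-test-2020.02.17
```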


5. Web Page Verification

Introduction to Kibana

Kibana is also an open source, free tool that provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping users aggregate, analyze, and search important log data.

Kibana Deployment

1. Download and install kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-linux-x86_64.tar.gz
tar zxvf kibana-7.6.0-linux-x86_64.tar.gz -C /opt/
mv /opt/kibana-7.6.0-linux-x86_64/ /usr/local/kibana

2. Modify Configuration

vim /usr/local/kibana/config/kibana.yml

server.port: 5601           # line 2: access port
server.host: "0.0.0.0"   # line 5: listen address
elasticsearch.url: "http://192.168.0.102:9200"   # line 12: ES address
kibana.index: ".kibana"    # line 20: Kibana index

3. Start the service

/usr/local/kibana/bin/kibana &
netstat -ntap |grep 5601 #View port number
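Starting Kibana with & does not survive logouts or reboots; a minimal systemd unit sketch is shown below. The paths assume the install location above, and the User value is an assumption (Kibana is often run as a dedicated unprivileged user):

```
# /etc/systemd/system/kibana.service  (illustrative)
[Unit]
Description=Kibana
After=network.target

[Service]
ExecStart=/usr/local/kibana/bin/kibana
User=root
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload followed by systemctl enable --now kibana.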

4. Web page validation:

http://192.168.0.102:5601/


Test ELK Platform

Collect system logs and Java exception logs

1. Modify the logstash configuration file:

vim /root/file.conf

input {
            file {
                    path => "/var/log/messages"     #Collect System Logs
                    type => "system"
                    start_position => "beginning"
            }
            file {
                    path => "/var/log/elasticsearch/yltx.log"   #Collect java exception logs
                    type => "es-error"
                    start_position => "beginning"
                    codec => multiline {
                    pattern => "^\["
                    negate => true
                    what => "previous"
                }
            }
}

output {

         if [type] == "system" {
                 elasticsearch {
                         hosts => ["192.168.0.102:9200"]  
                         index => "system-%{+YYYY.MM.dd}"  
                 }
         }

         if [type] == "es-error" {
                 elasticsearch {
                         hosts => ["192.168.0.102:9200"]
                         index => "es-error-%{+YYYY.MM.dd}"
                 }
         }
}
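The multiline codec above starts a new event at each line beginning with "[" (negate => true) and appends every other line to the previous event (what => "previous"), so a Java stack trace stays in one event. That grouping can be sketched offline with awk on a fabricated sample (the log lines are illustrative):

```shell
# Join continuation lines (those NOT starting with '[') onto the
# previous event, mimicking the multiline codec; ' | ' marks the joins.
sample='[2020-02-17 10:00:00] ERROR something failed
java.lang.NullPointerException
    at Foo.bar(Foo.java:42)
[2020-02-17 10:00:01] INFO recovered'

events=$(printf '%s\n' "$sample" | awk '
  /^\[/ { if (buf != "") print buf; buf = $0; next }
        { buf = buf " | " $0 }
  END   { if (buf != "") print buf }')
echo "$events"    # two events: the joined stack trace, then the INFO line
```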

2. Write to Elasticsearch

/opt/logstash/bin/logstash -f /root/file.conf

3. View Elasticsearch

4. View Kibana


Related Links

ELK website: https://www.elastic.co/cn/
Chinese Guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details


Posted on Mon, 17 Feb 2020 19:30:24 -0500 by gentusmaximus