Still logging into the server to search logs by hand? Setting up a log collection system is so much nicer!

Abstract

Advanced usage of the ELK log collection system. This article explains how to build a real, usable log collection system for an online environment. With it, you can say goodbye to searching logs on the server by hand!

ELK environment installation

ELK refers to a log collection system built from Elasticsearch, Logstash and Kibana. For setup details, please refer to "Spring Boot application integration with ELK to implement log collection". Only the latest Docker Compose script and a few installation points are provided here.

Docker Compose script

version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.4.0
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch" #Set the cluster name to elasticsearch
      - "discovery.type=single-node" #Start in single node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" #Set the memory size to use the jvm
      - TZ=Asia/Shanghai
    volumes:
      - /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins #Plugin directory mount
      - /mydata/elasticsearch/data:/usr/share/elasticsearch/data #Data directory mount
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:6.4.0
    container_name: kibana
    links:
      - elasticsearch:es #Use the es domain name to access the elasticsearch service
    depends_on:
      - elasticsearch #kibana starts after elasticsearch starts
    environment:
      - "elasticsearch.hosts=http://es:9200" #Set the address to access Elasticsearch
      - TZ=Asia/Shanghai
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.4.0
    container_name: logstash
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /mydata/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf #Mount the configuration file of logstash
    depends_on:
      - elasticsearch #logstash starts after elasticsearch starts
    links:
      - elasticsearch:es #Use the es domain name to access the elasticsearch service
    ports:
      - 4560:4560
      - 4561:4561
      - 4562:4562
      - 4563:4563

Key points of installation

  • Use the docker compose command to run all services:
docker-compose up -d
  • The first time you start Elasticsearch, it may fail to start because the /usr/share/elasticsearch/data directory lacks write permissions. Just change the permissions of the host's /mydata/elasticsearch/data directory and restart;
chmod 777 /mydata/elasticsearch/data/
  • Logstash needs the json_lines codec plugin installed:
logstash-plugin install logstash-codec-json_lines

Collect logs by scenario

To make logs easier to view, this article proposes the concept of collecting logs by scenario, divided into the following four types.

  • DEBUG log: the most complete log, containing all logs of DEBUG level and above in the application; collection should only be enabled in development and test environments;
  • ERROR log: contains only the ERROR level logs from the application; collection is enabled in all environments;
  • Business log: logs printed under our application's own package, used to view the business logs we print ourselves;
  • Record log: the access record of each interface call, used to check interface performance and capture request parameters.
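As a quick summary, the four scenarios map onto the Logstash ports and Elasticsearch index names used later in this article (ports 4560-4563 and the mall-tiny-* pattern come from the configuration below; the helper itself is only an illustrative sketch):

```python
# Illustrative summary of the four collection scenarios described above.
# Ports and index prefixes mirror the Logstash/Logback configuration in this article.
SCENARIOS = {
    "debug":    {"port": 4560, "index_prefix": "mall-tiny-debug"},
    "error":    {"port": 4561, "index_prefix": "mall-tiny-error"},
    "business": {"port": 4562, "index_prefix": "mall-tiny-business"},
    "record":   {"port": 4563, "index_prefix": "mall-tiny-record"},
}

def index_for(scenario: str, date: str) -> str:
    """Build the daily index name Logstash will write to, e.g. mall-tiny-debug-2020.06.22."""
    return f"{SCENARIOS[scenario]['index_prefix']}-{date}"
```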

Logback configuration details

Collecting logs by the scenarios above is mainly achieved through the Logback configuration, so let's go through that configuration first!

Full configuration

In Spring Boot, to customize the Logback configuration we write our own logback-spring.xml file. Here is the full configuration used in this article.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <!--Reference default log configuration-->
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <!--Using the default console log output implementation-->
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!--Application name-->
    <springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="springBoot"/>
    <!--Log file save path-->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <!--Logstash host-->
    <springProperty name="LOG_STASH_HOST" scope="context" source="logstash.host" defaultValue="localhost"/>

    <!--DEBUG Log output to file-->
    <appender name="FILE_DEBUG"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!--Output logs of DEBUG level and above-->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
        <encoder>
            <!--Set as default file log format-->
            <pattern>${FILE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!--Set file naming format-->
            <fileNamePattern>${LOG_FILE_PATH}/debug/${APP_NAME}-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <!--Log file size limit; roll to a new file beyond it, default 10MB-->
            <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
            <!--Log file retention days, default 30 days-->
            <maxHistory>${LOG_FILE_MAX_HISTORY:-30}</maxHistory>
        </rollingPolicy>
    </appender>

    <!--ERROR Log output to file-->
    <appender name="FILE_ERROR"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!--Output only ERROR level logs-->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <!--Set as default file log format-->
            <pattern>${FILE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!--Set file naming format-->
            <fileNamePattern>${LOG_FILE_PATH}/error/${APP_NAME}-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <!--Log file size limit; roll to a new file beyond it, default 10MB-->
            <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
            <!--Log file retention days, default 30 days-->
            <maxHistory>${LOG_FILE_MAX_HISTORY:-30}</maxHistory>
        </rollingPolicy>
    </appender>

    <!--DEBUG Log output to LogStash-->
    <appender name="LOG_STASH_DEBUG" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
        <destination>${LOG_STASH_HOST}:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!--Custom log output format-->
                <pattern>
                    <pattern>
                        {
                        "project": "mall-tiny",
                        "level": "%level",
                        "service": "${APP_NAME:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--When there are multiple Logstash services, use a round-robin connection policy-->
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!--ERROR Log output to LogStash-->
    <appender name="LOG_STASH_ERROR" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <destination>${LOG_STASH_HOST}:4561</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!--Custom log output format-->
                <pattern>
                    <pattern>
                        {
                        "project": "mall-tiny",
                        "level": "%level",
                        "service": "${APP_NAME:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--When there are multiple Logstash services, use a round-robin connection policy-->
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!--Business log output to LogStash-->
    <appender name="LOG_STASH_BUSINESS" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${LOG_STASH_HOST}:4562</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!--Custom log output format-->
                <pattern>
                    <pattern>
                        {
                        "project": "mall-tiny",
                        "level": "%level",
                        "service": "${APP_NAME:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--When there are multiple Logstash services, use a round-robin connection policy-->
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!--Interface access log output to LogStash-->
    <appender name="LOG_STASH_RECORD" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${LOG_STASH_HOST}:4563</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!--Custom log output format-->
                <pattern>
                    <pattern>
                        {
                        "project": "mall-tiny",
                        "level": "%level",
                        "service": "${APP_NAME:-}",
                        "class": "%logger",
                        "message": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <!--When there are multiple Logstash services, use a round-robin connection policy-->
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>

    <!--Control framework log output-->
    <logger name="org.slf4j" level="INFO"/>
    <logger name="springfox" level="INFO"/>
    <logger name="io.swagger" level="INFO"/>
    <logger name="org.springframework" level="INFO"/>
    <logger name="org.hibernate.validator" level="INFO"/>

    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
        <!--<appender-ref ref="FILE_DEBUG"/>-->
        <!--<appender-ref ref="FILE_ERROR"/>-->
        <appender-ref ref="LOG_STASH_DEBUG"/>
        <appender-ref ref="LOG_STASH_ERROR"/>
    </root>

    <logger name="com.macro.mall.tiny.component" level="DEBUG">
        <appender-ref ref="LOG_STASH_RECORD"/>
    </logger>

    <logger name="com.macro.mall" level="DEBUG">
        <appender-ref ref="LOG_STASH_BUSINESS"/>
    </logger>
</configuration>

Key points of configuration

Use default log configuration

In general, we do not need to customize console output; the default configuration is sufficient. For details, see console-appender.xml inside spring-boot-${version}.jar.

<!--Reference default log configuration-->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<!--Using the default console log output implementation-->
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

springProperty

This tag can read configuration properties from the Spring Boot configuration files. For example, the Logstash service address differs between environments, so we can define it in application.yml and reference it here.

For example, these properties are defined in application-dev.yml:

logstash:
  host: localhost

They can then be used directly in logback-spring.xml:

<!--Application name-->
<springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="springBoot"/>
<!--Logstash host-->
<springProperty name="LOG_STASH_HOST" scope="context" source="logstash.host" defaultValue="localhost"/>

filter

Logback provides two different filters for filtering log output.

ThresholdFilter: filters out logs below the specified threshold level. For example, the following configuration filters out all logs below INFO level.

<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>INFO</level>
</filter>

LevelFilter: filters by exact log level. For example, the following configuration accepts only ERROR level logs and denies everything else.

<filter class="ch.qos.logback.classic.filter.LevelFilter">
    <level>ERROR</level>
    <onMatch>ACCEPT</onMatch>
    <onMismatch>DENY</onMismatch>
</filter>
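The difference between the two filters can be sketched in plain code (a simplified model for illustration, not Logback's actual implementation):

```python
# Simplified model of Logback's two filters (illustrative, not the real API).
LEVELS = ["TRACE", "DEBUG", "INFO", "WARN", "ERROR"]

def threshold_filter(event_level: str, threshold: str) -> bool:
    """ThresholdFilter: accept events at or above the threshold level."""
    return LEVELS.index(event_level) >= LEVELS.index(threshold)

def level_filter(event_level: str, level: str) -> bool:
    """LevelFilter with onMatch=ACCEPT, onMismatch=DENY: accept exact matches only."""
    return event_level == level
```

So a threshold of INFO rejects DEBUG but accepts WARN and ERROR, while a LevelFilter on ERROR accepts ERROR and nothing else.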

appender

An appender controls how and where logs are output. Three types are used here.

  • ConsoleAppender: controls log output to the console, such as the default console output defined in console-appender.xml.
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
	<encoder>
		<pattern>${CONSOLE_LOG_PATTERN}</pattern>
	</encoder>
</appender>
  • RollingFileAppender: controls log output to files, including the rolling strategy: the file name pattern, the size at which a new file is started, and how many days files are retained.
<!--ERROR Log output to file-->
<appender name="FILE_ERROR"
          class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!--Set file naming format-->
        <fileNamePattern>${LOG_FILE_PATH}/error/${APP_NAME}-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
        <!--Log file size limit; roll to a new file beyond it, default 10MB-->
        <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
        <!--Log file retention days, default 30 days-->
        <maxHistory>${LOG_FILE_MAX_HISTORY:-30}</maxHistory>
    </rollingPolicy>
</appender>
  • LogstashTcpSocketAppender: controls log output to Logstash; it can configure the Logstash address, connection policy and log format.
<!--ERROR Log output to LogStash-->
<appender name="LOG_STASH_ERROR" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>${LOG_STASH_HOST}:4561</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>Asia/Shanghai</timeZone>
            </timestamp>
            <!--Custom log output format-->
            <pattern>
                <pattern>
                    {
                    "project": "mall-tiny",
                    "level": "%level",
                    "service": "${APP_NAME:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger",
                    "message": "%message",
                    "stack_trace": "%exception{20}"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
    <!--When there are multiple Logstash services, use a round-robin connection policy-->
    <connectionStrategy>
        <roundRobin>
            <connectionTTL>5 minutes</connectionTTL>
        </roundRobin>
    </connectionStrategy>
</appender>
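Under the hood, LogstashTcpSocketAppender simply writes one JSON object per line over TCP, which is what the json_lines codec on the Logstash side expects. A minimal sketch of the wire format, with field names taken from the encoder pattern above (the send helper is a hypothetical illustration; real applications use the appender itself):

```python
import json
import socket

def format_event(level: str, message: str, service: str = "mall-tiny-demo") -> str:
    """Build one JSON-lines event with fields matching the encoder pattern above.
    (service name "mall-tiny-demo" is an assumed example value.)"""
    event = {
        "project": "mall-tiny",
        "level": level,
        "service": service,
        "message": message,
    }
    return json.dumps(event) + "\n"  # the newline terminates the event (json_lines)

def send_event(host: str, port: int, line: str) -> None:
    """Push one event to a Logstash TCP input (hypothetical helper for illustration)."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(line.encode("utf-8"))
```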

logger

Only appenders referenced from a logger node are used. A logger configures under which conditions logs are printed; root is a special logger that applies to everything. The partitioning conditions for each log type are as follows.

  • DEBUG log: all logs of DEBUG level and above;
  • ERROR log: all ERROR level logs;
  • Business log: all logs of DEBUG level and above under the com.macro.mall package;
  • Record log: logs from the com.macro.mall.tiny.component.WebLogAspect class, an AOP aspect that records interface access information.

Control framework log output

The frameworks we use also print logs internally; their DEBUG level output is not useful to us, so these loggers can be raised to INFO level or above.

<!--Control framework log output-->
<logger name="org.slf4j" level="INFO"/>
<logger name="springfox" level="INFO"/>
<logger name="io.swagger" level="INFO"/>
<logger name="org.springframework" level="INFO"/>
<logger name="org.hibernate.validator" level="INFO"/>

Logstash configuration details

Next, we need to configure Logstash so that it can collect different logs by scenario. The following details the configuration used.

Full configuration

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
    type => "debug"
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4561
    codec => json_lines
    type => "error"
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4562
    codec => json_lines
    type => "business"
  }
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4563
    codec => json_lines
    type => "record"
  }
}
filter{
  if [type] == "record" {
    mutate {
      remove_field => "port"
      remove_field => "host"
      remove_field => "@version"
    }
    json {
      source => "message"
      remove_field => ["message"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["es:9200"]
    action => "index"
    codec => json
    index => "mall-tiny-%{type}-%{+YYYY.MM.dd}"
    template_name => "mall-tiny"
  }
}

Key points of configuration

  • input: use a different port for each log type, opening four ports from 4560 to 4563;
  • filter: for record type logs, parse the JSON-formatted message field into top-level fields for easy searching and viewing, then remove the original message;
  • output: build a custom index name from the log type and date.
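The record-type filter can be mimicked in a few lines: drop the transport fields, then lift the JSON carried in message up to top-level fields (an illustrative re-implementation of the mutate and json filters above, not Logstash code):

```python
import json

def record_filter(event: dict) -> dict:
    """Mimic the Logstash filter for type == "record": strip transport
    fields and flatten the JSON payload carried in the "message" field."""
    if event.get("type") != "record":
        return event
    # mutate: remove_field => port, host, @version
    event = {k: v for k, v in event.items() if k not in ("port", "host", "@version")}
    # json: source => "message", remove_field => ["message"]
    payload = json.loads(event.pop("message"))
    event.update(payload)  # e.g. "uri" becomes a directly searchable field
    return event
```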

SpringBoot configuration

Settings in the Spring Boot configuration files directly override the Logback configuration; for example, logging.level.root overrides the level set on the <root> node.

  • Development environment configuration: application-dev.yml
logstash:
  host: localhost
logging:
  level:
    root: debug
  • Test environment configuration: application-test.yml
logstash:
  host: 192.168.3.101
logging:
  level:
    root: debug
  • Production environment configuration: application-prod.yml
logstash:
  host: logstash-prod
logging:
  level:
    root: info
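The effect of the three profiles can be summarized as plain data (values copied from the YAML files above; the lookup helper is only illustrative):

```python
# Per-profile logging settings mirrored from application-{dev,test,prod}.yml above.
PROFILES = {
    "dev":  {"logstash_host": "localhost",     "root_level": "debug"},
    "test": {"logstash_host": "192.168.3.101", "root_level": "debug"},
    "prod": {"logstash_host": "logstash-prod", "root_level": "info"},
}

def settings_for(profile: str) -> dict:
    """Return the logging settings active for a given Spring profile."""
    return PROFILES[profile]
```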

Kibana advanced use

With the ELK environment built and configured as above, our log collection system is finally usable. Here are some tips for using Kibana!

  • First, start our test demo, then call a few interfaces (Swagger UI works well) to generate some log data;

  • After making some calls, you can create index patterns under Management -> Kibana -> Index Patterns. The Kibana service address is: http://192.168.3.101:5601

  • Once created, you can view all logs in Discover. For debug logs, just look at the mall-tiny-debug-* index pattern;

  • For log search, Kibana has a very powerful autocomplete feature, which can be turned on via the Options button on the right side of the search bar;

  • For record logs, just look at the mall-tiny-record-* index pattern. To find requests whose uri is /brand/listAll, simply enter uri: "/brand/listAll" in the search field;

  • For error logs, just look at the mall-tiny-error-* index pattern;

  • For business logs, just look at the mall-tiny-business-* index pattern; here we can also see the SQL statements printed by the application;

  • If the indices grow too large, they can be deleted under Management -> Elasticsearch -> Index Management.

Project source address

github.com/macrozheng/...


By MacroZheng
Link: https://juejin.im/post/5eef217d51882565d74fb4eb
 


Posted on Mon, 22 Jun 2020 02:32:07 -0400 by ChrisF79