Such fragrant dynamic log levels and outputs, are you sure you don't want to come in and have a look? —— Dynamically adjusting log levels and log files in production

This article is part of a series; follow along for future installments.

This article is dense. Please prepare yourself before digging in, and remember to like it when you finish.

Preface

In the last article we discussed log performance and the pros and cons of logging. A friend complained that it contained nothing practical, so this time I decided to deliver the goods and show my real strength. We will discuss how a production environment can dynamically adjust the log level per request or per user, and even write logs to a separate file. This article contains a lot of code; I suggest you run it yourself.

Imagine a requirement: a customer-service colleague reports a BUG found by a user online. You try to analyze it from the logs and wade through a 200 MB log file until your head hurts, but in the end find nothing useful. Finally you decide to enable DEBUG logging, restart the project, and ask the user to reproduce the operation. As a result, the disk on the production machine starts alarming within two minutes, you are criticized by the operations team, and the BOSS invites you in for a long "chat" about taking down production.

Let's play product manager for a moment and raise some requirements:

  • First, a small goal: can we adjust the log level dynamically, without restarting?
  • Can we adjust the log level dynamically for a single user, or even a single request?
  • Entering the world of the strong: beyond per-user dynamic levels, can we temporarily write a user's or a request's logs to a separate file?

With the requirements sorted out, let's solve them one by one.

Practice I: dynamically adjusting the log level (achieving the small goal)

This requirement is simple, and most readers probably know the answer already: both Logback and Log4j2 natively support hot reloading by watching the configuration file. It is trivial to use and only requires a configuration change. Of course, to confuse users, the two framework authors chose different configuration attributes:

<!-- Logback configuration: rescan the config file every 30 seconds -->
<configuration scan="true" scanPeriod="30 seconds" > 
  ...
</configuration> 

<!-- Log4j2 configuration: check for changes every 30 seconds -->
<Configuration monitorInterval="30">
...
</Configuration>

Practice II: more powerful dynamic adjustment of the log output level

Such a simple implementation can't really satisfy our needs. We decided to push further; after all, we are people with ambition.

Let's break down the problem:

  • To adjust the log level dynamically per user, we must be able to distinguish users;
  • To distinguish the log level according to the request, we need to identify the request;
  • To dynamically adjust the log level, we need to analyze the implementation of the log framework;

Let's solve the problem one by one:

  • To distinguish users, break the problem down. Web projects come in two main flavors: older projects use a Session, newer ones use a Token:
  • Session: straightforward — just put a log-level flag into the session.
  • Token: not complicated either — put the token into a cache such as Redis, or a database (the database is not recommended).
  • To distinguish requests:
  • Give users a button and, with a single-page application, attach an extra parameter to every HTTP request. This approach is risky and not suitable for Internet-facing projects.
  • For special needs, hard-code it during development.

Best practice for distinguishing users: expose a feature to administrators that lists online users; clicking a user selects the log level for that user's output.

  • Dynamically adjusting the log level is the hardest part. Analyzing the Logback API, we find that ch.qos.logback.classic.Logger already provides a setLevel method, which looks like it can change the output level at runtime. But readers of my previous article will know that Logger instances are recommended to be static final, so a Logger instance is shared across threads; if we change its level we will inevitably pollute other requests and other users. So that approach won't work. How do we solve it?

    When you're stuck, there are only two routes: read the source code, or comb through the documentation. Let's start with the source and analyze how Logback decides whether a log entry is output.

    We log a random statement, log.debug("A debug Log");, set a breakpoint and step in, and find that the real decision logic lives in filterAndLog_0_Or3Plus. The source is as follows:

    private void filterAndLog_0_Or3Plus(final String localFQCN,
                                        final Marker marker,
                                        final Level level,
                                        final String msg,
                                        final Object[] params,
                                        final Throwable t) {
      // The magic method
      final FilterReply decision = loggerContext
          .getTurboFilterChainDecision_0_3OrMore(marker, this, level,
                                                 msg, params, t);
      // NEUTRAL: fall through to the ordinary log-level check
      if (decision == FilterReply.NEUTRAL) {
        if (effectiveLevelInt > level.levelInt) {
          return;
        }
      // DENY: drop the entry without ever consulting the log level
      } else if (decision == FilterReply.DENY) {
        return;
      }
      // Reaching this point means the log entry will be output
      buildLoggingEventAndAppend(localFQCN, marker,
                                 level, msg, params, t);
    }
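The level comparison in the NEUTRAL branch uses Logback's integer level values. Here is a stdlib-only sketch of that check (the constants mirror Logback's published `Level.*_INT` values; this is an illustration, not Logback's code):

```java
// Sketch of the NEUTRAL branch above: a statement is suppressed when the
// logger's effective level is numerically above the statement's level.
// The constants mirror Logback's Level.*_INT values.
final class LevelCheck {
    static final int TRACE_INT = 5000;
    static final int DEBUG_INT = 10000;
    static final int INFO_INT = 20000;
    static final int WARN_INT = 30000;
    static final int ERROR_INT = 40000;

    // Mirrors: if (effectiveLevelInt > level.levelInt) return;  // no output
    static boolean wouldOutput(int effectiveLevelInt, int statementLevelInt) {
        return effectiveLevelInt <= statementLevelInt;
    }
}
```

So a logger at INFO outputs ERROR statements but suppresses DEBUG ones, which is exactly what the early return implements.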
     

    Analyzing the source, we reach a conclusion: the return value of the magic method takes priority over the log-level check. Interesting. Can we customize that magic method? Stepping further in, we find that TurboFilterList extends CopyOnWriteArrayList<TurboFilter>, so it is itself a List, and the logic of TurboFilterList#getTurboFilterChainDecision is simply to loop over itself, obtain a FilterReply, and return it. Look at the code:

    public FilterReply getTurboFilterChainDecision(final Marker marker,
                                                   final Logger logger,
                                                   final Level level,
                                                   final String format,
                                                   final Object[] params,
                                                   final Throwable t) {

      final int size = size();
      // With exactly one filter, return its result directly;
      // on an exception, return NEUTRAL.
      // size cannot be 0 here: the caller has already checked (not shown)
      if (size == 1) {
        try {
          TurboFilter tf = get(0);
          return tf.decide(marker, logger, level,
                           format, params, t);
        } catch (IndexOutOfBoundsException iobe) {
          return FilterReply.NEUTRAL;
        }
      }
      // With more than one, loop and return the first result that is not
      // NEUTRAL; the remaining filters are never consulted
      Object[] tfa = toArray();
      final int len = tfa.length;
      for (int i = 0; i < len; i++) {
        // for (TurboFilter tf : this) {
        final TurboFilter tf = (TurboFilter) tfa[i];
        final FilterReply r = tf
            .decide(marker, logger, level, format, params, t);
        if (r == FilterReply.DENY || r == FilterReply.ACCEPT) {
          return r;
        }
      }
      return FilterReply.NEUTRAL;
    }
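The first-non-NEUTRAL-wins logic of that loop can be condensed into a small stdlib-only sketch (the names here are illustrative stand-ins, not Logback's actual classes):

```java
import java.util.List;

enum Reply { DENY, NEUTRAL, ACCEPT }

interface Filter {
    Reply decide(String loggerName);
}

final class FilterChain {
    private final List<Filter> filters;

    FilterChain(List<Filter> filters) {
        this.filters = filters;
    }

    // The first filter answering ACCEPT or DENY wins; later filters are
    // never consulted. An empty or all-NEUTRAL chain yields NEUTRAL.
    Reply decide(String loggerName) {
        for (Filter f : filters) {
            Reply r = f.decide(loggerName);
            if (r != Reply.NEUTRAL) {
                return r;
            }
        }
        return Reply.NEUTRAL;
    }
}
```

This ordering matters when registering several turbo filters: put the most specific one first, because a DENY or ACCEPT from it short-circuits the rest.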
     

Why does the author separate the single-TurboFilter case from the multi-TurboFilter loop? Reply "TurboFilter" on the official account to get the answer (an extra analysis as a bonus for WeChat followers).

To recap: every log statement, regardless of level, eventually enters filterAndLog_0_Or3Plus to decide whether it is output, and that decision gives priority to the return value of TurboFilterList#getTurboFilterChainDecision, which in turn consults the registered TurboFilter objects. Following the source further, we find that the TurboFilterList is empty by default. We can all but assert that nobody creates an empty List for no reason: it must be an extension point for us. Combing through the documentation, we find the extension mechanism in the TurboFilters section, which says the following:

TurboFilters

TurboFilter objects all extend the TurboFilter abstract class. Like the regular filters, they use ternary logic to return their evaluation of the logging event.

Overall, they work much like the previously mentioned filters. However, there are two main differences between Filter and TurboFilter objects.

TurboFilter objects are tied to the logging context. Hence, they are called not only when a given appender is used, but each and every time a logging request is issued. Their scope is wider than appender-attached filters.

More importantly, they are called before the LoggingEvent object creation. TurboFilter objects do not require the instantiation of a logging event to filter a logging request. As such, turbo filters are intended for high performance filtering of logging events, even before the events are created.

This matches our analysis. A TurboFilter can also control whether a log entry is output, and its priority is higher than an ordinary Filter's. No surprise; after all, it's called Turbo.

The documentation gives an example. Let's define our own TurboFilter:

/**
 * DynamicLoggingFilter
 *
 * @author jiyanwai
 * @date 2020-01-15 16:09:16
 */
@Slf4j
public class DynamicLoggingFilter extends TurboFilter {

  public static final String DEBUG_HEADER_NAME = "X-Debug";
  public static final String DEBUG_SESSION_KEY = DEBUG_HEADER_NAME;

  /**
   * Possible return values: FilterReply.DENY, FilterReply.NEUTRAL or FilterReply.ACCEPT.
   * DENY: the log entry is dropped outright
   * ACCEPT: the log entry is output immediately, skipping the level check
   * NEUTRAL: fall through to the normal log-level check
   *
   * @return FilterReply.DENY, FilterReply.NEUTRAL or FilterReply.ACCEPT
   */
  @Override
  public FilterReply decide(Marker marker, Logger logger, Level level,
                            String format, Object[] params, Throwable t) {

    // Only handle our own loggers
    boolean isMyLogger = logger.getName().startsWith("com.jiyuanwai");
    if (!isMyLogger) {
      return FilterReply.NEUTRAL;
    }

    RequestAttributes requestAttributes = RequestContextHolder
        .getRequestAttributes();

    // At project startup or in scheduled tasks there may be no RequestAttributes
    if (requestAttributes == null) {
      return FilterReply.NEUTRAL;
    }

    // Check the request header first, to distinguish by request
    ServletRequestAttributes servletRequestAttributes =
        (ServletRequestAttributes) requestAttributes;
    // Driven by a request header here; in a real production environment, build an admin feature that puts a unique user identifier into the cache or session
    HttpServletRequest request = servletRequestAttributes.getRequest();
    String debug = request.getHeader(DEBUG_HEADER_NAME);
    boolean debugBoolean = Boolean.parseBoolean(debug);
    if (debugBoolean) {
      return FilterReply.ACCEPT;
    }

    // Then check the Session
    HttpSession session = request.getSession(false);
    if (session == null) {
      return FilterReply.NEUTRAL;
    }
    Object attribute = session.getAttribute(DEBUG_SESSION_KEY);
    debugBoolean = Boolean.parseBoolean((String) attribute);
    if (debugBoolean) {
      return FilterReply.ACCEPT;
    }

    return FilterReply.NEUTRAL;
  }

}

The log configuration file is adjusted as follows:

<configuration scan="false" debug="false">
    <turboFilter class="com.jiyuanwai.logging.extend.dynamic.DynamicLoggingFilter" />
    ...
    <root level="error">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

 

 

Write a method to test

@GetMapping
public void debugLevel() {
  log.debug("A debug Log");
}

Here I recommend IDEA's built-in HTTP Client:

HTTP Request

# Test RequestHeader usage
GET http://localhost:8082/log/level
Accept: */*
# Use a header, to avoid polluting the request parameters (otherwise file uploads would break)
X-Debug: true
Cache-Control: no-cache

###

After execution, you can see in the console:

22:50:19.816 DEBUG [082-exec-5] c.j.l.e.c.LogController       (  35) - A debug Log

We have now implemented dynamic log adjustment per request.

Let's continue testing with the Session. I will keep it simple here and just add a test method to the Controller; in production this would be implemented together with an online-user management feature, which readers can build as needed. If you run into difficulties, leave a comment and I will collect them and address them together.

@PutMapping
public void startDebugBySession(HttpSession session) {
  // For testing only; a production feature would look up the target user's session and then set the attribute
  session.setAttribute(DynamicLoggingFilter.DEBUG_SESSION_KEY, "true");
}

Send the request to enable Session-level debug:

# Open the current user DEBUG mode with Session
PUT http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###
# Remove the custom Header and test again
GET http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

The result is as follows:

09:28:16.662 DEBUG [082-exec-1] c.j.l.e.c.LogController       (  40) - A debug Log

There is almost no difference between Token and Session handling, so it is not shown here.
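For the Token variant, the TurboFilter would read the token from the request and consult a shared store instead of the session. Here is a minimal sketch, with an in-memory set standing in for Redis (DebugTokenRegistry is a hypothetical helper, not code from the article's repository):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a Redis-backed store of tokens that have
// debug logging enabled; in production, replace with Redis (ideally with
// a TTL, so a forgotten flag expires on its own).
final class DebugTokenRegistry {
    private static final Set<String> DEBUG_TOKENS = ConcurrentHashMap.newKeySet();

    static void enableDebug(String token)  { DEBUG_TOKENS.add(token); }
    static void disableDebug(String token) { DEBUG_TOKENS.remove(token); }

    // The TurboFilter would return FilterReply.ACCEPT when this is true
    static boolean isDebugEnabled(String token) {
        return token != null && DEBUG_TOKENS.contains(token);
    }
}
```

The admin endpoint then calls enableDebug with the target user's token, mirroring what startDebugBySession does with the session attribute.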

At this point we have gracefully implemented dynamic log output per request and per user. We can be proud of ourselves.

Practice III: the world of the strong

Let's implement the last requirement: dynamically outputting a separate log file.

The world of the strong

Let's analyze the requirement first. To temporarily write a given user's logs to a separate file, we can reuse the user-distinguishing methods above; the remaining problem is: how do we route a specific user's output to a specific file?

Readers of my previous articles know that the component actually responsible for writing logs is the Appender. Let's check the documentation first for an official solution. After a hard slog through it, we find that SiftingAppender, combined with a discriminator and MDC, seems to fit our need. SiftingAppender can separate log events by, for example, user session, so logs produced by different users can go into different log files, even one file per user. The discriminator does the identifying. But what is MDC?

"Mapped Diagnostic Context" is essentially a map maintained by the logging framework where the application code provides key-value pairs which can then be inserted by the logging framework in log messages. MDC data can also be highly helpful in filtering messages or triggering certain actions.

MDC is a Map maintained by the logging framework that can be used to filter messages and trigger operations. Two best practices:

  • Readers of the previous article (if you haven't read it, follow the official account and check the log series in the archive) will surely remember my super SessionId and RequestId. How were they implemented? MDC, of course — see below.
  • Additionally, the discriminator can work with MDC to pick the output file dynamically. Let's try it.
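Under the hood, MDC is essentially a per-thread map. Here is a stdlib-only stand-in (not the real org.slf4j.MDC, just an illustration of its per-thread semantics):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of org.slf4j.MDC: each thread sees its own map, which
// is why values put in a servlet Filter are visible to every log statement
// of that request, yet never leak across threads.
final class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CONTEXT.get().put(key, value); }
    static String get(String key)             { return CONTEXT.get().get(key); }
    static void clear()                       { CONTEXT.get().clear(); }
}
```

The layout can then reference these keys with %X{requestId} in the pattern, which is how the ids end up in every log line.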

The implementation idea: use a servlet Filter that runs first. Straight to the code:

/**
 * LogFilter For processing dynamic logs
 *
 * @author jiyanwai
 * @date 2020-01-16 06:05:19
 */
@Slf4j
@Component
@Order(Integer.MIN_VALUE)
@WebFilter("/*")
public class LogFilter implements Filter {

  public static final String REQUEST_ID_MDC_KEY = "requestId";
  public static final String SESSION_ID_MDC_KEY = "sessionId";
  public static final String STAND_ALONE_HEADER_KEY = "X-StandAlone-File";
  public static final String STAND_ALONE_SESSION_KEY = STAND_ALONE_HEADER_KEY;

  @Override
  public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
      throws IOException, ServletException {

    try {
      try {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        // requestId and the super SessionId; customize as needed
        String requestId = CommonUtils.generateRequestId();
        MDC.put(REQUEST_ID_MDC_KEY, requestId);
        String sessionId = request.getRequestedSessionId();
        MDC.put(SESSION_ID_MDC_KEY, sessionId);

        // Put the key into MDC only when independent file output is enabled
        String standAlone = request.getHeader(STAND_ALONE_HEADER_KEY);
        if (standAlone == null) {
          standAlone = (String) request.getSession().getAttribute(STAND_ALONE_SESSION_KEY);
        }
        if (standAlone != null) {
          // You can customize different paths here; sessionId will be spliced into the file name, see the XML below
          MDC.put(STAND_ALONE_SESSION_KEY, sessionId);
        }

      } catch (Exception e) {
        // Handle exceptions here so the business logic is unaffected
        log.error("Error handler dynamic log", e);
      }
      // Continue the chain; do not swallow exceptions from other Filters
      filterChain.doFilter(servletRequest, servletResponse);
    } finally {
      // Remember to clean up the MDC
      MDC.clear();
    }
  }
}
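CommonUtils.generateRequestId() used in the Filter above is not shown in the article. One plausible implementation, matching the 32-character ids seen in the sample log output, is a random UUID with the dashes stripped (an assumption on my part, not necessarily the author's code):

```java
import java.util.UUID;

// One plausible implementation of the generateRequestId() helper used in
// the Filter above: a random UUID with dashes removed, giving a compact
// 32-character lowercase hex id. This is an assumption, not the author's code.
final class CommonUtils {
    static String generateRequestId() {
        return UUID.randomUUID().toString().replace("-", "");
    }
}
```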

 

 

Add a SiftingAppender to the configuration file:

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>X-StandAlone-File</key>
    <!-- Used when the MDC key is absent; defaults to "all" -->
    <defaultValue>all</defaultValue>
  </discriminator>
  <sift>
    <!-- The nested appender's name must be unique -->
    <appender name="FILE-${X-StandAlone-File}" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <!-- Path and filename of the active log file -->
      <!-- ${X-StandAlone-File} is taken from the MDC -->
      <file>${LOG_PATH}/${X-StandAlone-File}.log</file>
      <!-- Rolling policy: by date and by size -->
      <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- Roll hourly -->
        <fileNamePattern>${LOG_PATH}/%d{yyyy-MM}/log-%d{yyyy-MM-dd-HH}-%i.${X-StandAlone-File}.gz</fileNamePattern>
        <!-- A single file is at most 50 MB -->
        <maxFileSize>50MB</maxFileSize>
        <!-- Keep at most 500 archives and 5 GB of disk space in total -->
        <maxHistory>500</maxHistory>
        <totalSizeCap>5GB</totalSizeCap>
      </rollingPolicy>
      <!-- Append to the log file -->
      <append>true</append>
      <!-- Log file format -->
      <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>${FILE_LOG_PATTERN}</pattern>
        <charset>utf-8</charset>
      </encoder>
    </appender>
  </sift>
</appender>
<root level="error">
  <!-- For the convenience of testing, also output to the console -->
  <appender-ref ref="STDOUT"/>
  <appender-ref ref="SIFT"/>
</root>

Adjust the Controller method:

/**
 * Enable debug for the current user; for testing. In production, combine
 * this with the online-user management feature.
 *
 * @param session session
 */
@PutMapping
public void startDebugBySession(HttpSession session,
                                @RequestParam(required = false) boolean standAlone) {
  // For testing only; a production feature would look up the target user's session and then set the attribute
  session.setAttribute(DynamicLoggingFilter.DEBUG_SESSION_KEY, "true");
  if (standAlone) {
    session.setAttribute(LogFilter.STAND_ALONE_SESSION_KEY, "");
  }
}

First test with the header approach, to see the file structure when independent output is not enabled:

# Test the RequestHeader approach
GET http://localhost:8082/log/level
Accept: */*
# Use a header, to avoid polluting the request parameters (otherwise file uploads would break)
X-Debug: true
Cache-Control: no-cache

###

Default output file (independent output not enabled)

The content is as follows. You can see that our RequestId and super SessionId were successfully written to the log:

2020-01-16 09:54:05.827 DEBUG [http-nio-8082-exec-1] c.j.l.extend.controller.LogController    --- requestId=ee77a879576147bcafdcb745ba5767d3, sessionId=21F4DE2AADBA675F2135601D760BF18E : A debug Log

Use the HTTP Client to enable debug together with the separate file:

# Enable the current user DEBUG mode with Session
PUT http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded

standAlone=true

###

Test log output

# Test Session mode usage
GET http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

Independent output file

Extension

Earlier, we solved dynamic levels with the log framework's native hot reload. The drawback is that modification is inconvenient: we have to log into the production server and edit the file with vim. As mentioned above, Logback provides APIs for hot modification directly. Let's see:

/**
 * Modify a single logger's level
 *
 * @param loggerName logger instance name
 * @param level      level
 */
@PostMapping
public void changeLoggingLevel(String loggerName,
                               @RequestParam(required = false, 
                                             defaultValue = "DEBUG") String level) {
  ch.qos.logback.classic.Logger logger =
    (ch.qos.logback.classic.Logger) LoggerFactory.getLogger(loggerName);
  logger.setLevel(Level.toLevel(level));
}

Test it:

# Test adjust global log level
POST http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded

loggerName=ROOT

###

If we want to restore:

/**
 * Reset log levels to the configuration-file defaults
 */
@DeleteMapping
public void restoreLoggingLevel() {
  LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

  try {
    JoranConfigurator configurator = new JoranConfigurator();
    configurator.setContext(context);
    context.reset();
    configurator.doConfigure(new ClassPathResource("logback.xml").getInputStream());
  } catch (JoranException | IOException ignore) {
    // StatusPrinter will handle this
  }
  StatusPrinter.printInCaseOfErrorsOrWarnings(context);
}
Continue testing:

# Test restore log level to profile level
DELETE http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

Concluding remarks

In this article we implemented dynamic output levels and files at the request and user level, which should cover 99% of scenarios. I have only used Logback here; readers who want Log4j2 can try implementing it themselves along the same lines, and if this article gets more than 1000 likes I will implement it for you too. In the next article I will publish corrections to earlier articles, respond to reader feedback, and cover a few more best practices. Stay tuned.

Finally, I will post a mind map of the analysis process, to help you learn to analyze and solve problems.

The above is my personal view. If you spot any problems or errors, please leave a comment to discuss and correct them. Writing is not easy; if you think this is well written, please follow, like, and share.

Scan the QR code to follow the official account and get updates first.

Code: https://github.com/jiyunwai/logging-extend


Posted on Thu, 16 Jan 2020 08:30:01 -0500 by strangermaster