Druid connection pool stat monitoring caused a memory leak

Alibaba's Druid connection pool can monitor SQL. However, the monitoring information is kept in memory, which can cause a memory leak in some scenarios.

1. Causes

An online machine raised an alarm (high heap memory usage). Log in to the server and dump the heap for analysis:

(image: heap dump analysis)

Finding: sqlStatMap in JdbcDataSourceStat consumes a relatively large amount of memory.

Because Druid has stat monitoring enabled, SQL information is stored in this Map, occupying memory and causing the leak.

The stat monitoring SQL page, where you can see that the SQL information is held:

(image: the stat SQL monitoring page)

Naturally, someone has raised this question in Druid's GitHub issues: each SQL statement is held by a reference for a long time, which increases the frequency of Full GC.

2. Source code analysis

The source code that stores SQL information into sqlStatMap is as follows:

public class JdbcDataSourceStat implements JdbcDataSourceStatMBean {
    private final LinkedHashMap<String, JdbcSqlStat> sqlStatMap;

    public JdbcSqlStat createSqlStat(String sql) {
        lock.writeLock().lock();
        try {
            // The raw SQL text is the map key: a new JdbcSqlStat entry is
            // created only when this exact string has not been seen before.
            JdbcSqlStat sqlStat = sqlStatMap.get(sql);
            if (sqlStat == null) {
                sqlStat = new JdbcSqlStat(sql);
                sqlStat.setDbType(this.dbType);
                sqlStat.setName(this.name);
                sqlStatMap.put(sql, sqlStat);
            }

            return sqlStat;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

We can see that if the SQL (the key) already exists in the Map, no new entry is put in. So what does the key actually look like?

After debugging locally:

(image: debug view of the SQL key)

As you can see, the SQL does not carry parameter values; it is the original parameterized SQL text.

(image: the SQL text stored as the key)

However, during the analysis we noticed that many of the keys stored in sqlStatMap appeared to be variants of the same SQL.

Further analysis showed that these were batch statements!

Reproducing the case: when the number of parameters in a batch operation differs, the keys stored in sqlStatMap differ.

(image: reproduction, different batch sizes produce different keys)

Conclusion: for batch operations, because the number of parameters varies from batch to batch, a large number of entries accumulate in sqlStatMap.
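A minimal sketch of the effect (this is illustrative code, not Druid's own): batch inserts with different row counts produce different SQL strings, so each one becomes a new key in a map keyed by the raw SQL text, just like sqlStatMap.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchKeyDemo {
    // Build a multi-row insert the way many mappers do for batches.
    static String batchInsertSql(int rows) {
        StringBuilder sb = new StringBuilder("insert into t_user (id, name) values ");
        for (int i = 0; i < rows; i++) {
            sb.append(i > 0 ? ", " : "").append("(?, ?)");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Stand-in for sqlStatMap: keyed by the raw SQL text.
        Map<String, Object> sqlStatMap = new LinkedHashMap<>();
        for (int rows = 1; rows <= 100; rows++) {
            sqlStatMap.putIfAbsent(batchInsertSql(rows), new Object());
        }
        // 100 logically identical statements -> 100 distinct entries.
        System.out.println(sqlStatMap.size());
    }
}
```

Under load, with batch sizes varying freely, the number of distinct keys is effectively unbounded.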

3. Spring Boot 2.x automatically enables Druid's stat filter

Some readers found that stat was not enabled anywhere in their Spring Boot project's configuration, yet the phenomenon above still occurred.

Note that Spring Boot 2.x auto-configures Druid, and stat monitoring is enabled automatically:

public class DruidFilterConfiguration {

    @Bean
    @ConfigurationProperties(FILTER_STAT_PREFIX)
    @ConditionalOnProperty(prefix = "spring.datasource.druid.filter.stat", name = "enabled", matchIfMissing = true)
    @ConditionalOnMissingBean
    public StatFilter statFilter() {
        return new StatFilter();
    }
}

matchIfMissing = true means that the bean is loaded even when spring.datasource.druid.filter.stat.enabled is not configured at all.

Solution: explicitly set spring.datasource.druid.filter.stat.enabled=false in the configuration file, or define the StatFilter bean in your own Configuration class.

The monitoring console is also started automatically:

@ConditionalOnWebApplication
@ConditionalOnProperty(name = "spring.datasource.druid.stat-view-servlet.enabled", havingValue = "true", matchIfMissing = true)
public class DruidStatViewServletConfiguration {
    @Bean
    public ServletRegistrationBean statViewServletRegistrationBean(DruidStatProperties properties) {
        DruidStatProperties.StatViewServlet config = properties.getStatViewServlet();
        ServletRegistrationBean registrationBean = new ServletRegistrationBean();
        registrationBean.setServlet(new StatViewServlet());
        registrationBean.addUrlMappings(config.getUrlPattern() != null ? config.getUrlPattern() : "/druid/*");
        if (config.getAllow() != null) {
            registrationBean.addInitParameter("allow", config.getAllow());
        }
        if (config.getDeny() != null) {
            registrationBean.addInitParameter("deny", config.getDeny());
        }
        if (config.getLoginUsername() != null) {
            registrationBean.addInitParameter("loginUsername", config.getLoginUsername());
        }
        if (config.getLoginPassword() != null) {
            registrationBean.addInitParameter("loginPassword", config.getLoginPassword());
        }
        if (config.getResetEnable() != null) {
            registrationBean.addInitParameter("resetEnable", config.getResetEnable());
        }
        return registrationBean;
    }
}

You can visit http://ip:port/druid/sql.html to view the console. No password is configured by default.

4. Solutions

Druid's monitoring and statistics are implemented through its filter-chain extension mechanism. To enable monitoring and statistics, configure StatFilter; see https://github.com/alibaba/druid/wiki/配置_StatFilter for details.

4.1 Scheme 1: disable Druid's stat directly

Explicitly set spring.datasource.druid.filter.stat.enabled=false in the configuration file.
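For example, in application.yml (property names follow the @ConditionalOnProperty shown above; relaxed binding also accepts the flat form):

```yaml
# Explicitly turn off the auto-configured stat filter
spring:
  datasource:
    druid:
      filter:
        stat:
          enabled: false
```

With this set, the StatFilter bean is not created, so sqlStatMap is never populated.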

4.2 Scheme 2: enable SQL merging

When many SQL statements share the same structure, you can enable SQL merging. For example, if sqlStatMap grows too large because of batch operations, this scheme works well.

The SQL monitoring map LinkedHashMap<String, JdbcSqlStat> sqlStatMap uses the SQL statement as its key. To solve the problem above, where batch processing floods sqlStatMap with variants of the same SQL, you can enable SQL merging.

(images: the stat page after SQL merging is enabled)

You can see that only the structure of the SQL is retained; its parameter values are ignored.

(image: merged SQL shown on the stat page)
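As a rough illustration of what merging does to the key (Druid actually re-parses the statement with its own SQL parser; this regex-based helper is a hypothetical, simplified stand-in): collapsing every VALUES list to a single parameterized group makes batch inserts of any size share one key.

```java
import java.util.regex.Pattern;

public class MergeSqlSketch {
    // Matches the whole VALUES clause, literals and placeholders included.
    private static final Pattern VALUE_GROUPS =
            Pattern.compile("values\\s*\\(.*\\)", Pattern.CASE_INSENSITIVE);

    // Collapse all VALUES groups to one parameterized group, so batch
    // inserts of different sizes normalize to the same map key.
    static String merge(String sql) {
        return VALUE_GROUPS.matcher(sql).replaceAll("values (?)");
    }

    public static void main(String[] args) {
        String a = merge("insert into t_user (id, name) values (?, ?), (?, ?)");
        String b = merge("insert into t_user (id, name) values (?, ?)");
        System.out.println(a.equals(b)); // both normalize to the same key
    }
}
```

Druid's real merging is far more robust (it handles IN lists, literals, and arbitrary statement shapes), but the key-collapsing idea is the same.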

Solution: add the JVM parameter:

-Ddruid.stat.mergeSql=true

or configure it via connection properties:

spring:
  datasource:
    druid:
      connectionProperties: druid.stat.mergeSql=true

or define the StatFilter bean yourself:

@Configuration
public class DruidConfig {

    @Bean
    public StatFilter statFilter() {
        StatFilter statFilter = new StatFilter();
        // Merge structurally identical SQL into a single stat entry
        statFilter.setMergeSql(true);
        return statFilter;
    }
}

4.3 Scheme 3: limit the size of sqlStatMap

If business requirements mean you cannot merge SQL, or merging has little effect (few SQL statements share the same structure), you can instead set druid.stat.sql.MaxSize (1000 by default) to cap the map.

Source code:

public class JdbcDataSourceStat implements JdbcDataSourceStatMBean {

    // In the constructor, sqlStatMap is created with a bounded-size policy:
        sqlStatMap = new LinkedHashMap<String, JdbcSqlStat>(16, 0.75f, false) {

            @Override
            protected boolean removeEldestEntry(Map.Entry<String, JdbcSqlStat> eldest) {
                boolean remove = (size() > maxSqlSize);

                if (remove) {
                    JdbcSqlStat sqlStat = eldest.getValue();
                    if (sqlStat.getRunningCount() > 0 || sqlStat.getExecuteCount() > 0) {
                        skipSqlCount.incrementAndGet();
                    }
                }

                return remove;
            }
        };
}

LinkedHashMap provides a removeEldestEntry(Map.Entry eldest) hook. By overriding it, you can add a condition and return true when it is met; whenever a put of a new entry makes the hook return true, the oldest key/value pair in the map is removed.

sqlStatMap overrides removeEldestEntry to cap the number of entries.
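A minimal, self-contained sketch of the same capping technique (not Druid's code): an anonymous LinkedHashMap subclass whose removeEldestEntry evicts the oldest entry once the map exceeds a fixed limit.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedMapDemo {
    // Insertion-ordered map that never grows past maxSize entries.
    static <K, V> Map<K, V> boundedMap(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Returning true evicts the oldest entry right after a put
                // pushes the size over the limit.
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> m = boundedMap(3);
        for (int i = 0; i < 10; i++) {
            m.put("sql-" + i, i);
        }
        System.out.println(m.size());   // capped at 3
        System.out.println(m.keySet()); // only the 3 most recent keys
    }
}
```

With accessOrder set to false (as in Druid's snippet above), "eldest" means the earliest-inserted entry, so the oldest SQL stat is the one dropped.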

Solution: add the JVM parameter:

-Ddruid.stat.sql.MaxSize=100

or configure it via connection properties:

spring:
  datasource:
    druid:
      connectionProperties: druid.stat.sql.MaxSize=100

Recommended reading

Ambushed by DruidDataSource and MyBatis together, resulting in an OOM


Posted on Tue, 30 Nov 2021 02:11:15 -0500 by bjdouros