SpringBoot integrates Spring Cache to simplify distributed cache development

Preface

In the last blog post, we took a deep look at integrating Spring Boot with Redis. Using RedisTemplate or StringRedisTemplate and choosing data structures per scenario works, but it tightly couples the cache code with the business code. Is there a simpler way?

Answer: Yes, SpringCache.

In this blog post, we introduce Spring Cache and how it unifies different caching technologies so they can be plugged into a project efficiently and conveniently. Finally, we look at how Spring Cache handles cache breakdown, cache penetration and cache avalanche, and where it falls short.

Introduction to Spring Cache

Spring Data Redis provides a high-level abstraction over the underlying Redis client libraries (Jedis, JRedis, RJC). RedisTemplate offers the various Redis operations, exception translation and serialization, supports publish/subscribe, and provides an implementation of the Spring 3.1 cache abstraction.

Spring Cache is not a cache implementation itself; it is a general abstraction over cache implementations. Based on the cache framework Spring provides, developers can embed their chosen cache implementation into a project efficiently and conveniently. Spring also ships simple implementations of its own, such as NoOpCacheManager and ConcurrentMapCacheManager. Through Spring Cache you can quickly plug in your own cache implementation.

  • Since 3.1, Spring has defined the org.springframework.cache.Cache and org.springframework.cache.CacheManager interfaces to unify different caching technologies; it also supports JCache (JSR-107) annotations to simplify development.
  • The Cache interface is the component specification of a cache and contains the full set of cache operations; Spring provides various implementations under it, such as RedisCache, EhCacheCache, ConcurrentMapCache, etc.
  • Each time a cacheable method is called, Spring checks whether that method has already been invoked with the given arguments. If so, the result is fetched straight from the cache; if not, the method is invoked, its result is cached and returned to the caller, and subsequent calls are served from the cache.
  • When using the Spring cache abstraction, two things need attention (a minimal sketch of the underlying Cache/CacheManager contract follows this list):
    • Cache declaration: identify the methods that need to be cached and their caching policy
    • Cache configuration: configure the backing cache where the data is stored and from which it is read
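To make the shared Cache/CacheManager contract concrete, here is a minimal sketch (not taken from the example project) that uses Spring's simple in-memory ConcurrentMapCacheManager; the same calls work unchanged against RedisCacheManager or any other implementation.

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;

public class CacheAbstractionDemo {
    public static void main(String[] args) {
        // Any CacheManager implementation can sit behind the same abstraction
        CacheManager cacheManager = new ConcurrentMapCacheManager("users");

        Cache users = cacheManager.getCache("users");
        users.put(1, "Tom");                        // write an entry
        String name = users.get(1, String.class);   // read it back in a type-safe way
        System.out.println(name);                   // prints "Tom"

        users.evict(1);                             // remove a single entry
        users.clear();                              // remove everything in this cache
    }
}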

Integrate SpringCache to simplify development

Spring Cache is the upper-layer abstraction and RedisCache is the underlying implementation. In this blog post we combine it with Redis to build a distributed cache, using cached user data as the example. The table-creation statement and the MyBatis pieces are not shown one by one here; they can all be viewed in the source code, along with the overall project layout.

Introduce dependency

<!-- Bring in the cache starter -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<!-- Use Redis as the cache middleware -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Use steps

Using Spring Cache is actually very simple: like putting an elephant into a refrigerator, it takes just two steps.

1. Enable caching with @EnableCaching

@EnableConfigurationProperties(CacheProperties.class)
@Configuration
@EnableCaching
public class CacheConfig {
  //Other contents are omitted for the time being
}

2. Use annotations to complete caching operations

@Repository
@CacheConfig(cacheNames = "users")
public class UserDao implements IUserDao{
    @Autowired
    private UserMapper userMapper;
    @Cacheable(key = "'getTotalCount'")
    @Override
    public int getTotalCount(){
        int totalCount = userMapper.getTotalCount();
        return totalCount;
    }

    @Cacheable(key = "#userId")
    @Override
    public User getUser(Integer userId){
      return  userMapper.getUser(userId);
    }
    @Caching(evict = {
            @CacheEvict(key = "'getUsers'"),
            @CacheEvict(key = "'getTotalCount'")
    })
    @Override
    public void insertUser(User u){

        userMapper.insertUser(u);
    }
    @Cacheable(key = "'getUsers'")
    @Override
    public List<User> getUsers(){

       return userMapper.getUsers();
    }
    @Caching(evict = {
            @CacheEvict(key = "'getUsers'")
    })
    @Override
    public void updateUserNameById(Integer userId, String name){

        userMapper.updateUserNameById(userId, name);
    }
    @Caching(evict = {
            @CacheEvict(key = "'getUsers'"),
            @CacheEvict(key = "'getTotalCount'"),
            @CacheEvict(key = "#userId")
    })
    @Override
    public void deleteUser(Integer userId){
        userMapper.deleteUser(userId);
    }

    /**
     * Call the method to update the cached data, modify the data of the database and update the new cache at the same time.
     */
    @Caching(evict = {
            @CacheEvict(key="'getUsers'")
    },put = {@CachePut(key = "#result.id")})
    @Override
    public User updateUser(User user){
        userMapper.updateUser(user);
        return user;
    }
}

Test cases are in the source code examples.
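The repository's tests are not reproduced here; the following is only a minimal sketch of what such a test might look like (the class and method names are assumptions, not the repository's actual test code).

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class UserDaoCacheTest {

    @Autowired
    private IUserDao userDao;

    @Test
    void secondReadShouldHitTheCache() {
        User first = userDao.getUser(1);   // miss: goes to MySQL, result stored under users::1
        User second = userDao.getUser(1);  // hit: served from Redis, UserMapper is not called again
        // with SQL logging enabled, only one query should be issued for the two calls
    }
}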

Add configuration

server:
  port: 8084

spring:
  application:
    name: springboot-cache
  datasource:
    url: jdbc:mysql://localhost:3306/user_db_test
    username: root
    password: admin123
    type: com.zaxxer.hikari.HikariDataSource
    driver-class-name: com.mysql.cj.jdbc.Driver
  redis:
    # Redis server address
    host: localhost
    # Redis server connection port
    port: 6379
    # Redis server connection password (blank by default)
    password:
    # Redis database index (0 by default)
    database: 0
    # Connection timeout (ms)
    timeout: 300
    # Client type (lettuce is used here; set to jedis to switch clients)
    client-type: lettuce
    lettuce:
      pool:
        # Maximum number of connections in the connection pool (negative value indicates no limit)
        max-active: 8
        # Maximum blocking wait time of connection pool (negative value indicates no limit)
        max-wait: -1
        # Maximum free connections in the connection pool
        max-idle: 8
        # Minimum free connections in connection pool
        min-idle: 0
  cache:
    type: redis
    redis:
      # Whether to cache null values (helps against cache penetration)
      cache-null-values: true
      # Cache expiration time in milliseconds
      time-to-live: 100000
      # Cache key prefix, used to distinguish this cache from others; if none is specified, the cache name is used as the prefix by default
#      key-prefix: CACHE_
      # Whether to use a key prefix at all; false disables the prefix
#      use-key-prefix: false

# Configure mybatis rules
mybatis:
  config-location: classpath:mybatis/mybatis-config.xml  # global configuration file location
  mapper-locations: classpath:mybatis/mapper/*.xml  # SQL mapper file locations

Annotation details

  • @Cacheable: Triggers cache population
  • @CacheEvict: Triggers cache eviction
  • @CachePut: Updates the cache without interfering with the method execution
  • @Caching: Regroups multiple cache operations to be applied on a method
  • @CacheConfig: shares some common cache-related settings at class level

Annotation parameters

  • @Cacheable indicates that the result of the current method should be cached: if the value is already in the cache the method is not called; if it is not, the method is called and its result is put into the cache. Its main attributes are:
    • value / cacheNames: which named cache the data goes into [cache partition, usually split by business type]. Note that if you configure the key prefix spring.cache.redis.key-prefix=CACHE_, the value in @Cacheable(value={"user"}) no longer appears as the key prefix.
    • key: the key under which the cached object is stored. By default it is built from all of the method's parameters; if you specify it yourself you must use a SpEL expression, and string literals inside it must be wrapped in single quotes.
    • condition: an additional caching condition; only data matching it is cached. SpEL syntax.
    • unless: the condition under which the result is NOT cached. SpEL syntax.
    • sync: synchronized loading, i.e. the cache miss and the subsequent update are protected by a lock.
    Default behavior:
    1. If the data is already in the cache, the method is not invoked.
    2. Cached values are serialized with the JDK serialization mechanism by default, and the serialized bytes are stored in Redis.
    3. Keys are generated automatically when not specified; the default looks like users::SimpleKey []. You can set your own key via the key attribute, which accepts a SpEL expression.
    4. The default TTL is -1, i.e. never expires; an expiration time can be configured in the configuration file.
  • @CacheEvict deletes cache entries [failure mode]. allEntries: whether every entry in the cache should be cleared. It defaults to false; when set to true, Spring Cache ignores the specified key. Clearing the whole cache at once is more efficient than evicting entries one by one. beforeInvocation: by default the eviction runs after the method returns successfully, so if the method throws, nothing is evicted; setting beforeInvocation=true makes Spring evict the specified entries before the method is invoked.
  • @CachePut updates the cache with the method's return value [double-write mode].
  • @Caching combines multiple cache operations; it allows multiple @Cacheable, @CachePut and @CacheEvict annotations to be nested on the same method. A short sketch of these attributes in use follows this list.
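A minimal sketch of these attributes in use (UserQueryDao and its helper methods are hypothetical, not part of the example project; note that sync = true cannot be combined with unless):

import java.util.Collections;
import java.util.List;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Repository;

@Repository
@CacheConfig(cacheNames = "users")
public class UserQueryDao {

    // condition: only consult/populate the cache for positive ids
    // unless: skip caching when the result is null
    @Cacheable(key = "#userId", condition = "#userId > 0", unless = "#result == null")
    public User getUser(Integer userId) {
        return loadFromDatabase(userId);
    }

    // sync = true: on a miss only one thread loads the value, the others wait for it
    @Cacheable(key = "'getUsers'", sync = true)
    public List<User> getUsers() {
        return loadAllFromDatabase();
    }

    // allEntries = true: clear every entry in the "users" cache, the key is ignored
    // beforeInvocation = true: evict before the method body runs, even if it throws later
    @CacheEvict(allEntries = true, beforeInvocation = true)
    public void reloadAll() {
        // trigger a full reload elsewhere
    }

    private User loadFromDatabase(Integer userId) { return null; }                // placeholder query
    private List<User> loadAllFromDatabase() { return Collections.emptyList(); }  // placeholder query
}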

Available SpEL expressions

Each SpEL expression evaluates against a dedicated context. Besides the method parameters, the framework exposes cache-related metadata such as the method name. The following list shows the names available in the context; you can use them in key, condition and unless expressions. A short sketch follows the list.

  • methodName (root object): the name of the method being invoked. Example: #root.methodName
  • method (root object): the method being invoked. Example: #root.method.name
  • target (root object): the target object being invoked. Example: #root.target
  • targetClass (root object): the class of the target object. Example: #root.targetClass
  • args (root object): the arguments of the invocation, as an array. Example: #root.args[0]
  • caches (root object): the collection of caches the current method runs against. Example: #root.caches[0].name
  • argument name (evaluation context): any of the method arguments by name. If the name is unavailable (e.g. there is no debug information), the argument can also be addressed as #a<#arg>, where #arg is the argument index starting at 0. Example: #iban or #a0 (#p0 or #p<#arg> can be used as aliases)
  • result (evaluation context): the result of the method call (the value to be cached). Only available in unless expressions, in @CachePut key expressions, and in @CacheEvict expressions when beforeInvocation=false. For wrappers such as Optional, #result refers to the actual object, not the wrapper. Example: #result
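A short sketch of these expressions in use (ReportService and the cache name "reports" are assumptions for illustration; addressing arguments by name, such as #year, requires parameter names in the bytecode, otherwise fall back to #a0/#p0):

import java.util.Collections;
import java.util.List;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // Root metadata: the key becomes something like "reports::ReportService.monthlyReport"
    @Cacheable(cacheNames = "reports",
               key = "#root.targetClass.simpleName + '.' + #root.methodName")
    public String monthlyReport() {
        return "report-body";
    }

    // Named arguments: the key becomes e.g. "reports::2021-12"; empty results are not cached
    @Cacheable(cacheNames = "reports", key = "#year + '-' + #month", unless = "#result.isEmpty()")
    public List<String> rows(int year, int month) {
        return Collections.emptyList();
    }

    // #result is only meaningful for @CachePut, for unless, and for @CacheEvict with beforeInvocation=false
    @CachePut(cacheNames = "reports", key = "#result")
    public String refresh(String reportId) {
        return reportId;
    }
}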

Custom cache configuration

Customize the serialization method, cache key prefix, default cache name handling, cache expiration time, whether to cache null values, and so on.

@EnableConfigurationProperties(CacheProperties.class) // enable binding of the spring.cache.* properties
@Configuration
@EnableCaching // @EnableCaching moved here from the startup class
public class CacheConfig {


    /**
     * A fully hard-coded RedisCacheConfiguration would ignore the settings in the configuration file, because:
     * 1. those settings are bound to CacheProperties:
     *      @ConfigurationProperties(prefix = "spring.cache")
     *      public class CacheProperties { ... }
     * 2. to make them take effect, CacheProperties is injected here and applied below.
     * @return the customized cache manager
     */
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory, CacheProperties cacheProperties){
         // cache configuration object
         RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
         redisCacheConfiguration = redisCacheConfiguration
         // value serializer (new GenericJackson2JsonRedisSerializer() is a common alternative)
        .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new FastJsonRedisSerializer<>(Object.class)));

        CacheProperties.Redis redisProperties = cacheProperties.getRedis();
        if (redisProperties.getTimeToLive() != null) {
            redisCacheConfiguration = redisCacheConfiguration.entryTtl(redisProperties.getTimeToLive());
        }
        if (redisProperties.getKeyPrefix() != null) {
            redisCacheConfiguration = redisCacheConfiguration.prefixCacheNameWith(redisProperties.getKeyPrefix());
        }
        if (!redisProperties.isCacheNullValues()) {
            redisCacheConfiguration = redisCacheConfiguration.disableCachingNullValues();
        }
        if (!redisProperties.isUseKeyPrefix()) {
            redisCacheConfiguration = redisCacheConfiguration.disableKeyPrefix();
        }

        return RedisCacheManager
                .builder(RedisCacheWriter.nonLockingRedisCacheWriter(redisConnectionFactory))
                 .cacheDefaults(redisCacheConfiguration).build();
    }
}

Principle

1. Auto configuration

CacheAutoConfiguration imports CacheProperties, which binds the basic cache properties. Through @Import it also brings in CacheConfigurationImportSelector, which selects the cache configuration matching the cache type set by the user.

2. Configure Redis as cache

With the type set to redis, RedisCacheConfiguration is imported automatically; based on CacheProperties and the Redis connection settings it auto-configures the cache manager, RedisCacheManager.
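If you want to see which manager the auto-configuration actually picked, a small probe like the following (a hypothetical class, not part of the example project) prints the implementation; with spring.cache.type=redis it should be RedisCacheManager.

import org.springframework.boot.CommandLineRunner;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheManagerProbe {

    // Prints the CacheManager implementation chosen by the auto-configuration at startup
    @Bean
    public CommandLineRunner printCacheManager(CacheManager cacheManager) {
        return args -> System.out.println("CacheManager in use: " + cacheManager.getClass().getName());
    }
}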

Caching effect

To improve system performance, we usually put some data into a cache to speed up access, while the database remains the source of truth for persisted data. In real business, however, caching splits into a read-cache scenario and a write-cache scenario, and each has its own pitfalls.

Read cache scenario

What data is suitable for caching?

  • The requirements for immediacy and data consistency are not high.
  • Data that is accessed heavily but updated rarely (read much more than it is written).
  • The basic flow of the read scenario: check the cache first; on a miss, query the database and write the result back to the cache. A minimal sketch of this flow follows.
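A minimal sketch of this cache-aside read flow written by hand against RedisTemplate (the class, key format and TTL are assumptions); this is roughly what @Cacheable automates for you.

import java.time.Duration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class UserReadService {

    private final RedisTemplate<String, Object> redisTemplate;
    private final UserMapper userMapper;   // the MyBatis mapper from the example project

    public UserReadService(RedisTemplate<String, Object> redisTemplate, UserMapper userMapper) {
        this.redisTemplate = redisTemplate;
        this.userMapper = userMapper;
    }

    public User getUser(Integer userId) {
        String key = "users::" + userId;
        User cached = (User) redisTemplate.opsForValue().get(key);   // 1. try the cache first
        if (cached != null) {
            return cached;                                           // 2. hit: return immediately
        }
        User fromDb = userMapper.getUser(userId);                    // 3. miss: query the database
        if (fromDb != null) {
            redisTemplate.opsForValue().set(key, fromDb, Duration.ofMinutes(30)); // 4. write back with a TTL
        }
        return fromDb;
    }
}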

Read cache problem

Cache penetration

Description:

Cache penetration means the requested data exists neither in the cache nor in the database, yet users keep requesting it. Every request goes straight to the database, finds nothing, and therefore never populates the cache, so the next request hits the database again.

At this point the cache is doing nothing: every request reaches the database, and under heavy traffic the database may go down. The cache has effectively been "penetrated" and has no effect.

Solution:

  1. Interface validation. Add checks at the interface layer, such as user authentication and basic validation of the request data.
  2. Cache null values. When a key can be found neither in the cache nor in the database, still write the key with an empty value and a short TTL, such as 30 seconds (a long TTL would interfere with normal use once real data appears). This stops an attacker from hammering the same non-existent id over and over.
  3. Bloom filter. Put every key that can possibly exist into a Bloom filter; keys that cannot exist are filtered out immediately, and only keys that might exist go on to the cache and the database. A sketch combining the last two ideas follows this list.
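A single-class sketch combining solutions 2 and 3 (the class PenetrationGuard is hypothetical; it assumes Guava is on the classpath for the Bloom filter and that the filter has been pre-loaded with every id that really exists):

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;

public class PenetrationGuard {

    // pre-loaded at startup with every existing user id (about 1% false-positive rate)
    private final BloomFilter<Integer> existingIds =
            BloomFilter.create(Funnels.integerFunnel(), 1_000_000, 0.01);

    private final StringRedisTemplate redisTemplate;
    private final UserMapper userMapper;

    public PenetrationGuard(StringRedisTemplate redisTemplate, UserMapper userMapper) {
        this.redisTemplate = redisTemplate;
        this.userMapper = userMapper;
    }

    public String getUserJson(Integer userId) {
        // 1. Bloom filter: ids that were never inserted are rejected without touching Redis or MySQL
        if (!existingIds.mightContain(userId)) {
            return null;
        }
        String key = "users::" + userId;
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached.isEmpty() ? null : cached;               // "" is our cached-null marker
        }
        User user = userMapper.getUser(userId);
        if (user == null) {
            // 2. cache an empty marker with a short TTL so repeated misses do not hammer MySQL
            redisTemplate.opsForValue().set(key, "", Duration.ofSeconds(30));
            return null;
        }
        String json = toJson(user);
        redisTemplate.opsForValue().set(key, json, Duration.ofMinutes(30));
        return json;
    }

    private String toJson(User user) { return String.valueOf(user); } // placeholder serialization
}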

Cache breakdown

Description:

Cache breakdown refers to a hot key that has no data in the cache but does have data in the database (typically because the cached entry has just expired). Because concurrency on that key is high, many requests miss the cache at the same moment and all go to the database at once, causing an instantaneous spike in database load.

Solution:

  1. Make hotspot data never expire. Set the cache entry so it does not expire and refresh it asynchronously with a scheduled task. This suits extreme scenarios with very heavy traffic; you have to decide how long the business can tolerate stale data and what happens when the refresh fails, otherwise the entry stays dirty forever.
  2. Add a mutex. Lock per key: for the same key only one thread is allowed to rebuild the value, while the other threads block in place, wait for the first thread's result, and then read it from the cache. Among many concurrent requests, only the first one acquires the lock and queries the database; the rest wait until it has written the value into the cache. A single-JVM sketch follows this list.
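A minimal single-JVM sketch of the per-key mutex (the class HotKeyLoader is hypothetical; with Spring Cache, @Cacheable(sync = true) gives the same local behavior, and across several application instances a distributed lock such as Redisson's would be needed):

import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import org.springframework.data.redis.core.RedisTemplate;

public class HotKeyLoader {

    // one lock per cache key; a real implementation would also clean these up
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final RedisTemplate<String, Object> redisTemplate;
    private final UserMapper userMapper;

    public HotKeyLoader(RedisTemplate<String, Object> redisTemplate, UserMapper userMapper) {
        this.redisTemplate = redisTemplate;
        this.userMapper = userMapper;
    }

    public User getUser(Integer userId) {
        String key = "users::" + userId;
        User cached = (User) redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();   // only one thread per key rebuilds the entry
        try {
            // double-check: another thread may have filled the cache while we were waiting
            cached = (User) redisTemplate.opsForValue().get(key);
            if (cached != null) {
                return cached;
            }
            User fromDb = userMapper.getUser(userId);
            if (fromDb != null) {
                redisTemplate.opsForValue().set(key, fromDb, Duration.ofMinutes(30));
            }
            return fromDb;
        } finally {
            lock.unlock();
        }
    }
}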

Cache avalanche

Description:

A large number of hot keys share the same expiration time, so they all become invalid at the same moment. A flood of requests then hits the database at once, the load spikes, an avalanche follows, and the database may even go down.

Cache avalanche is essentially an "upgraded" cache breakdown: breakdown concerns a single hot key, while an avalanche concerns a whole group of hot keys.

Solution:

  1. Spread out the expiration times. Give cached entries a random offset on top of their TTL so that large batches of data do not expire at the same moment. A sketch follows this list.
  2. Deploy the cache in a distributed fashion, so hot data is spread evenly across different cache nodes.
  3. Make hotspot data never expire.
  4. Add a mutex, exactly as for cache breakdown: lock per key so that only one thread rebuilds the value while the others wait and then read it from the cache.
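A minimal sketch of spreading out expiration times with random jitter (the class and the jitter range are arbitrary choices for illustration):

import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;
import org.springframework.data.redis.core.RedisTemplate;

public class JitteredCacheWriter {

    private final RedisTemplate<String, Object> redisTemplate;

    public JitteredCacheWriter(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // base TTL plus a random offset, so a batch of keys written together does not expire together
    public void put(String key, Object value, Duration baseTtl) {
        long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 300); // up to 5 extra minutes
        redisTemplate.opsForValue().set(key, value, baseTtl.plusSeconds(jitterSeconds));
    }
}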

Write cache scenario

Reading from the cache is usually unproblematic, but as soon as updates are involved (updating both the database and the cache), inconsistency between Redis and MySQL can easily arise. There are two common update patterns: double-write mode and failure (invalidate) mode.

Double write mode

Write the database and write the cache at the same time.

Problem 1: even with a single thread, the database update may succeed while the cache update fails, leaving the two inconsistent.

Problem 2: with multiple threads, one writer may stall, so cache write 2 lands before cache write 1 and the older value ends up overwriting the newer one.


Failure mode

Whether you write to MySQL first and then delete the Redis cache, or delete the cache first and then write to the database, data inconsistency can occur.

Problem 1: the data changes, the cache is deleted first, and the database has not yet been modified. A request arrives, finds the cache empty, queries the database, reads the old value from before the modification, and puts it into the cache. The updating program then finishes modifying the database, and now the cache and the database are inconsistent.


Both double-write mode and failure mode can leave the cache inconsistent. How should we deal with this?

1. Cached data should not be treated as real-time or held to ultra-high consistency requirements, so always add an expiration time when caching; that way the latest data is picked up regularly.

2. For data with high real-time and consistency requirements, query the database, even if it is slower.

3. Use locks to make concurrent reads and writes safe: writers queue up in order while reads are unaffected, so a read-write lock fits well; see the sketch after point 4.

4. If the real business scenario demands more, refer to the ultimate solution below.
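A minimal single-JVM sketch of point 3, the read-write lock (the class GuardedUserCache is hypothetical; across multiple application instances a distributed read-write lock, for example Redisson's RReadWriteLock, would be needed):

import java.time.Duration;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.springframework.data.redis.core.RedisTemplate;

public class GuardedUserCache {

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final RedisTemplate<String, Object> redisTemplate;
    private final UserMapper userMapper;

    public GuardedUserCache(RedisTemplate<String, Object> redisTemplate, UserMapper userMapper) {
        this.redisTemplate = redisTemplate;
        this.userMapper = userMapper;
    }

    // reads run concurrently; they are only blocked while a write is in progress
    public User getUser(Integer userId) {
        rwLock.readLock().lock();
        try {
            User cached = (User) redisTemplate.opsForValue().get("users::" + userId);
            return cached != null ? cached : userMapper.getUser(userId);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // writes are serialized: update the database first, then refresh (or evict) the cache entry
    public void updateUser(User user) {
        rwLock.writeLock().lock();
        try {
            userMapper.updateUser(user);
            redisTemplate.opsForValue().set("users::" + user.getId(), user, Duration.ofMinutes(30));
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}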

Ultimate solution

Asynchronous update cache (synchronization mechanism based on subscription binlog)

Overall technical idea:

MySQL binlog incremental subscription consumption + message queue + incremental data update to redis

1) Reads go to Redis: hot data basically lives in Redis

2) Writes go to MySQL: inserts, updates and deletes are all MySQL operations

3) Updating the Redis data: MySQL's binlog of data operations is used to update Redis

In this way, as soon as MySQL produces new write, update or delete operations, the binlog is read and parsed, and a message queue pushes the changes so that the Redis cache on every node is updated.
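A hedged sketch of such a consumer (the topic name, the message format and the UserChangeEvent DTO are all assumptions; Canal or a similar binlog subscriber is assumed to publish row changes to a message queue, and Spring Kafka's @KafkaListener is used here purely for illustration):

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class UserBinlogConsumer {

    private final StringRedisTemplate redisTemplate;

    public UserBinlogConsumer(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // row-change events for the user table arrive on this (assumed) topic
    @KafkaListener(topics = "user-table-binlog")
    public void onRowChange(String message) {
        UserChangeEvent event = parse(message);          // e.g. Jackson/fastjson deserialization
        String key = "users::" + event.getUserId();
        if ("DELETE".equals(event.getType())) {
            redisTemplate.delete(key);                   // drop the stale entry
        } else {                                         // INSERT or UPDATE
            redisTemplate.opsForValue().set(key, event.getRowJson());
        }
    }

    private UserChangeEvent parse(String message) {
        // placeholder: deserialize the MQ payload into the small event DTO below
        return new UserChangeEvent();
    }

    // minimal event DTO assumed for this sketch
    static class UserChangeEvent {
        private Integer userId;
        private String type;     // INSERT / UPDATE / DELETE
        private String rowJson;  // the changed row serialized as JSON
        Integer getUserId() { return userId; }
        String getType() { return type; }
        String getRowJson() { return rowJson; }
    }
}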

Shortcomings of Spring Cache

Read mode

  • Cache penetration: querying data that does not exist. Solution: cache empty results with spring.cache.redis.cache-null-values: true.
  • Cache breakdown: a large number of concurrent requests query the same expired entry at the same time. By default no lock is used; @Cacheable(sync = true) solves the breakdown problem.
  • Cache avalanche: a large number of keys expire at the same time. Solution: set an expiration time, e.g. spring.cache.redis.time-to-live: 100000. So the three read-mode problems are all covered by Spring Cache.

Write mode (keeping the cache consistent with the database)

  • Use read/write locks;
  • Introduce Canal to pick up MySQL changes and update Redis;
  • For data that is both read and written heavily, just query the database directly. Spring Cache does nothing special for the write mode; handle it according to your specific business.

Summary

  • Regular data (read-mostly data with low timeliness and consistency requirements) can use Spring Cache.
  • Write mode: as long as the cached data has an expiration time, short-term inconsistency is acceptable to the business.
  • Special data: needs a special design; any design divorced from the business is meaningless.

Code example

Readers can find the examples for this article in the following module of the repository below:

<module>springboot-cache</module>
  • CodeChina: https://codechina.csdn.net/jiuqiyuliang/springboot-learning

Author: program ape Xiaoliang

Posted on Tue, 07 Dec 2021