Integration framework - Redis

Redis introduction

Redis (Remote Dictionary Server) is an open-source, in-memory key-value database written in ANSI C. It supports networking, can persist data to disk, and provides APIs in many languages. From March 15, 2010, the development of Redis was hosted by VMware; since May 2013 it has been sponsored by Pivotal.

  • Redis supports data persistence: data in memory can be saved to disk and reloaded after a restart.
  • Redis is not limited to simple key-value data; it also provides list, set, zset (sorted set), hash and other data structures.
  • Redis supports data backup through master-slave replication.

Redis data type

Redis supports five data types:

  1. string
  2. hash
  3. list
  4. set
  5. zset (sorted set)

string and hash are the most commonly used in practical projects.

Advantages of Redis

  1. High performance. Redis can handle roughly 110,000 reads and 80,000 writes per second.
  2. Rich data types. Redis supports operations on strings, lists, hashes, sets and sorted sets.
  3. Atomicity. All Redis operations are atomic; like transactions, they either succeed completely or fail completely.

Differences between Redis and Memcached

  • Redis offers rich data structures (strings, lists, hashes, sets, sorted sets); Memcached stores only simple string values.
  • Redis can persist data to disk (RDB/AOF); Memcached keeps data in memory only.
  • Redis supports master-slave replication; Memcached has no built-in replication.

Persistence mechanism of Redis

Redis provides two persistence mechanisms RDB and AOF:

  • RDB: records all key-value pairs of the Redis database as a point-in-time dataset snapshot (semi-persistent mode). Data is written to a temporary file, which replaces the previous persistence file once the snapshot completes, enabling data recovery.

  • AOF: appends every write command to a log file. When the file grows too large, AOF rewriting merges and compacts the commands; before a rewrite, some commands can still be removed from the file (for example a mistakenly issued FLUSHALL).

Because AOF degrades performance, RDB snapshots are generally used. With a snapshot taken, say, every 15 minutes, up to 15 minutes of data can be lost in a disaster; even so, RDB is the persistence method recommended by the official documentation and enabled by default.
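For reference, these defaults can be seen in redis.conf; a sketch of the relevant directives (the save lines shown are the stock defaults):

```
# redis.conf defaults: take an RDB snapshot if...
save 900 1      # ...at least 1 key changed in 900 s (15 min)
save 300 10     # ...at least 10 keys changed in 300 s
save 60 10000   # ...at least 10000 keys changed in 60 s

# AOF is disabled by default; set to "yes" to enable it
appendonly no
appendfsync everysec
```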

Redis eviction policies

When memory is full, Redis evicts keys according to the configured maxmemory-policy:

  • volatile-lru: evict the least recently used key among keys with an expiration set
  • volatile-ttl: evict, among keys with an expiration set, the key closest to expiring
  • volatile-random: evict a random key among keys with an expiration set
  • allkeys-lru: evict the least recently used key among all keys
  • allkeys-random: evict a random key among all keys
  • noeviction: never evict; return an error on writes when memory is full
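In redis.conf, the eviction behaviour is selected with two directives; a minimal sketch (the values are illustrative):

```
# Evict keys once memory use reaches this cap
maxmemory 256mb
# Which policy to apply when the cap is hit
maxmemory-policy allkeys-lru
```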

Redis practice


  1. Prepare a VM and run the test locally.
  2. Install CentOS 7.
  3. Download Redis and start it.


  1. Download address: http: //
     Choose the latest, stable release to download.

  2. Log in to the virtual machine; use sudo for administrator commands. If you are a non-root user, switch to root (I use root here for typing convenience).

  3. wget <the tar.gz of the version you downloaded>

You can download it directly with this command, or download it locally and put it in a folder for extraction:

 tar xzf redis-2.8.17.tar.gz


 cd redis-2.8.17

Enter the redis directory and compile:

 make

After make completes, the compiled Redis server program redis-server and the client test program redis-cli appear under the redis-2.8.17 directory. Both programs live in the src directory of the installation directory and can be seen after compilation.

Enter the src directory

cd src

Start redis server

./redis-server

Redis is now started; a large ASCII-art banner in the terminal indicates that startup succeeded.

This is the default startup mode: the port number and configuration file take their default values.

During development we generally adjust the redis.conf configuration file, so the server is started with the configuration file as a parameter:

 cd src
 ./redis-server ../redis.conf

You can inspect the settings in the redis.conf configuration file, or query them from the client at runtime:

 config get *

These parameters are also the ones you will configure from Java below. You can set a password (requirepass, which clients supply with AUTH). The port defaults to 6379, and the bind address defaults to the local address; if you comment out the bind line, connections from any host are accepted.

Now that Redis is started, you can open another session to make operation easier.


The client starts and you can operate on data: try SET, GET, APPEND and the other commands for these data types.
Redis has 16 databases by default (numbered 0-15); SELECT 0 is the default. This is worth knowing about.

With that, Redis is set up in a normal Linux environment.
Next: since what we develop is written in Java, we cannot operate everything through the Redis client; we need to connect and interact from code.

Connecting with Jedis

This article uses Jedis for the connection. There are many connection methods on the Redis official website you can refer to; the basic ones are similar, and once you are familiar with one, you can basically understand the others at a glance.

  1. Prepare the jedis jar package: import the dependency by adding it to the pom. For details, refer to other blogs (I cannot post pictures here).
  2. If you use Jedis with a connection pool, you also need the commons-pool jar package.
  3. The rest of the basic development environment (JDK, Maven) is assumed and not covered here.
  4. Two factors that may break the connection: Redis configuration problems, and the firewall (turn the firewall off if necessary).

The configuration item bind 127.0.0.1 in redis.conf is the default and means that only local connections are allowed.
The configuration item requirepass xxx in redis.conf sets a password, meaning that the client must send the AUTH command with the Redis password.
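A minimal redis.conf sketch combining the two items above (the password is a placeholder):

```
# Default: accept only local connections; comment out to allow remote clients
# bind 127.0.0.1
requirepass your-strong-password    # clients must authenticate with AUTH your-strong-password
```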

Configuration methods

  • First, with Spring Boot: configure the parameters in application.yml.
  • Second, with plain Spring: configure the parameters in the bean configuration.

Analysis of common Redis configuration parameters

  • Redis database index (0 by default)
  • Redis server address
  • Redis server connection port
  • Redis server connection password (empty by default)
  • Maximum number of connections in the connection pool (a negative value means no limit)
  • Maximum blocking wait time of the connection pool (a negative value means no limit)
  • Maximum idle connections in the connection pool
  • Minimum idle connections in the connection pool
  • Connection timeout (ms)
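As a sketch, the parameters above map onto the standard Spring Boot properties below (spring.redis prefix with the Jedis pool; the values are placeholders):

```yaml
spring:
  redis:
    database: 0        # Redis database index (default 0)
    host: 127.0.0.1    # Redis server address
    port: 6379         # Redis server connection port
    password:          # Redis server connection password (empty by default)
    timeout: 2000      # Connection timeout (ms)
    jedis:
      pool:
        max-active: 8  # Maximum connections (negative = no limit)
        max-wait: -1   # Maximum blocking wait (negative = no limit)
        max-idle: 8    # Maximum idle connections
        min-idle: 0    # Minimum idle connections
```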


Project connection dependency

<!--integrate redis-->
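The comment above presumably introduced a Maven dependency block; a typical sketch for Spring Boot with Jedis looks like this (artifact versions are managed by the Spring Boot parent and omitted here):

```xml
<!--integrate redis-->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
```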

The first method: the Spring Boot connection

    @Configuration
    public class RedisConfiguration {
        Logger logger = LoggerFactory.getLogger(RedisConfiguration.class);

        @Value("${spring.redis.host}")
        private String host;

        @Value("${spring.redis.port}")
        private int port;

        @Value("${spring.redis.timeout}")
        private int timeout;

        @Value("${spring.redis.jedis.pool.max-idle}")
        private int maxIdle;

        @Value("${spring.redis.jedis.pool.max-wait}")
        private long maxWaitMillis;

        @Value("${spring.redis.password}")
        private String password;

        @Bean
        public JedisPool redisPoolFactory() {
            JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
            jedisPoolConfig.setMaxIdle(maxIdle);
            jedisPoolConfig.setMaxWaitMillis(maxWaitMillis);
            // The fields above are bound with @Value from the application configuration,
            // so the values can be customized there
            JedisPool jedisPool = new JedisPool(jedisPoolConfig, host, port, timeout, password);
            // Return the JedisPool; obtain a connection with jedis = jedisPool.getResource()
            return jedisPool;
        }
    }
After obtaining the connection pool, you can get a connection:

 jedis = jedisPool.getResource()

When obtaining a Jedis instance, consider multithreading: the acquisition method can be locked. Exceptions may also be involved, so use try/catch.

Finally, write a close method for the Jedis, or, after obtaining it, use it inside a try block and close it in the finally block.
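The acquire-use-close pattern can be sketched with try-with-resources; FakeJedis below is a hypothetical stand-in for a pooled connection (a real Jedis also implements Closeable, so the same shape applies):

```java
import java.util.HashMap;
import java.util.Map;

public class PoolUsageSketch {
    // Hypothetical stand-in for a pooled Jedis connection
    static class FakeJedis implements AutoCloseable {
        private final Map<String, String> store;
        boolean closed = false;
        FakeJedis(Map<String, String> store) { this.store = store; }
        String set(String k, String v) { store.put(k, v); return "OK"; }
        String get(String k) { return store.get(k); }
        @Override public void close() { closed = true; } // returns the connection to the pool
    }

    public static void main(String[] args) {
        Map<String, String> backing = new HashMap<>();
        FakeJedis leaked = null;
        // try-with-resources closes (i.e. returns) the connection even on exceptions
        try (FakeJedis jedis = new FakeJedis(backing)) {
            jedis.set("k", "v");
            System.out.println(jedis.get("k")); // prints "v"
            leaked = jedis;
        }
        System.out.println(leaked.closed); // prints "true"
    }
}
```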

The second method: Spring configuration

You can define a configuration file, written basically the same way as with Spring Boot.
The other option is bean configuration:

<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
		<property name="maxTotal" value="600"/>
		<property name="maxIdle" value="400"/>
		<property name="maxWaitMillis" value="200"/>
</bean>
<!-- The bean above configures the pool config; continue adding property elements as needed -->
<!-- Configure the connection pool -->
<bean id="jedisPool" class="redis.clients.jedis.JedisPool" scope="singleton">
		<constructor-arg index="0" ref="jedisPoolConfig"/>
		<constructor-arg index="1" value="127.0.0.1"/>
		<constructor-arg index="2" value="6379"/>
</bean>

How Spring Boot automatically injects the Redis configuration

The Redis configuration information in the application.yml configuration file:

  spring:
    redis:
      port: 6379
      # Connection timeout (ms)
      timeout: 2000
      database: 0
      jedis:
        pool:
          # Maximum number of connections (a negative number means no limit)
          max-active: 100
          # Maximum idle connections
          max-idle: 10
          # Maximum blocking wait time (a negative number means no limit)
          max-wait: 100000

@ConfigurationProperties binding will find the configuration file and inject the values.

Either Jedis or Lettuce will be injected automatically, depending mainly on which client (and whose connection pool) is configured on the classpath.

The bound properties are then used to build the connection configuration.

RedisProperties is imported automatically, and the auto-configuration imports both client configurations:
@Import({ LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class })

The resulting connection factory is put into the RedisTemplate configuration.

redis serialization processing

The auto-configured redisTemplate can be overridden because @ConditionalOnMissingBean is added to it.

// Override the RedisTemplate in an @Configuration class

  @Bean
  public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory factory, RedisSerializer fastJson2JsonRedisSerializer) {
    // Copy the default configuration and adjust it
    RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(factory);
    // key adopts String serialization
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    // value adopts fastjson serialization
    redisTemplate.setValueSerializer(fastJson2JsonRedisSerializer);
    return redisTemplate;
  }

Caching architecture

redis and mysql consistency processing scheme

Generally, reading from the cache is unproblematic, but as soon as data updates are involved (updating both the database and the cache), inconsistency between the cache and the database arises easily. Whether you write the database first and then delete the cache, or delete the cache first and then write the database, data inconsistency can occur. For instance:

1. If the Redis cache is deleted first, and another thread reads before the MySQL write completes, it finds the cache empty, reads the old data from the database, and writes it into the cache. The cache now holds dirty data.

2. If the database is written first, but the writing thread goes down before deleting the cache, the stale cache entry is never deleted and inconsistency also occurs.

Because writes and reads are concurrent, their order cannot be guaranteed, so cache and database data will diverge. How to solve it? Here are two solutions, from easy to hard; choose and combine them according to the business and the technical cost.

1, Delayed double deletion strategy

redis.del(key) is performed both before and after writing to the database, with a reasonable sleep in between. The specific steps are:

  • 1) Delete cache first

  • 2) Re write database

  • 3) Sleep for 500ms (depending on the specific service time)

  • 4) Delete the cache again.

So, how to determine the 500 milliseconds? How long should I sleep?

You need to estimate the read-and-business-logic latency of your own project. The purpose of the sleep is to ensure the read request finishes before the write request deletes any dirty cache data that the read request may have backfilled.

Of course, this strategy must also account for the redis master-slave synchronization delay. The sleep before the final delete should be the read business-logic latency plus a few hundred ms; for example, sleep for 1 second.
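A minimal sketch of the four steps, using in-memory maps as stand-ins for the cache and the database (the timings are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DelayedDoubleDelete {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
        ConcurrentMap<String, String> db = new ConcurrentHashMap<>();    // stand-in for MySQL
        db.put("stock", "10");
        cache.put("stock", "10");

        cache.remove("stock");          // 1) delete the cache first
        db.put("stock", "9");           // 2) write the database
        cache.put("stock", "10");       // a concurrent reader backfills the old value here
        Thread.sleep(500);              // 3) sleep longer than a read request takes
        cache.remove("stock");          // 4) delete the cache again, clearing the dirty backfill

        System.out.println(db.get("stock"));                     // prints "9"
        System.out.println(cache.getOrDefault("stock", "miss")); // prints "miss"
    }
}
```

Without step 4, the backfilled "10" would survive in the cache while the database already holds "9".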

2, Set the expiration time of the cache

Theoretically, setting an expiration time on the cache guarantees eventual consistency: all writes go to the database, and once a cache entry expires, the next read naturally fetches the new value from the database and backfills the cache.

Combining the double-delete policy with a cache timeout, the worst case is that data is inconsistent within the timeout window, at the cost of longer write requests.

3, How to delete the cache again after writing the database?

One disadvantage of the scheme above is that, if the cache deletion fails for any reason after the database has been updated, data inconsistency persists. We therefore need a mechanism that guarantees retries.

1. Specific process of scheme I

(1) Update database data;

(2) Cache deletion failed due to various problems;

(3) Send the key to be deleted to the message queue;

(4) Consume the message yourself and get the key to be deleted;

(5) Continue to retry the delete operation until it succeeds.
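The retry loop of scheme I can be sketched with a stdlib BlockingQueue standing in for the message queue; cacheDelete below is a hypothetical cache operation that fails once before succeeding:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DeleteRetry {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(16); // stand-in message queue
    static final AtomicInteger attempts = new AtomicInteger();

    // Hypothetical cache delete that fails on the first attempt
    static boolean cacheDelete(String key) {
        return attempts.incrementAndGet() > 1;
    }

    public static void main(String[] args) throws InterruptedException {
        String key = "user:42";
        if (!cacheDelete(key)) {     // (2) the delete failed after the DB update
            queue.put(key);          // (3) send the key to the message queue
        }
        while (!queue.isEmpty()) {   // (4) consume the message ourselves
            String k = queue.take();
            if (!cacheDelete(k)) {   // (5) retry until the delete succeeds
                queue.put(k);
            }
        }
        System.out.println("deleted after " + attempts.get() + " attempts"); // prints "deleted after 2 attempts"
    }
}
```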

However, this scheme has the disadvantage of intruding heavily into business code. Hence scheme II: start a subscriber program that follows the database binlog to obtain the data being operated on, and start another program in the application that takes this information from the subscriber and deletes the cache.

2. Specific process of scheme II

(1) Update database data;

(2) The database will write the operation information into the binlog log;

(3) The subscriber extracts the required data and key;

(4) Start another non business code to obtain the information;

(5) Trying to delete the cache, it is found that the deletion failed;

(6) Sending these information to the message queue;

(7) Retrieve the data from the message queue and retry the operation.

The above schemes are often encountered in business. You can choose specific schemes according to the complexity of business scenarios and the requirements for data consistency

redis distributed lock implementation

When modifying existing data in the system, you need to read it first, then modify and save it. At this time, it is easy to encounter concurrency problems. Since modification and saving are not atomic operations, some data operations may be lost in concurrent scenarios. In a single server system, we often use local locks to avoid the problems caused by concurrency. However, when services are deployed in a cluster, local locks cannot take effect among multiple servers. At this time, distributed locks are needed to ensure data consistency.

Redis lock mainly uses the setnx command of redis.

Lock command: SETNX key value — when the key does not exist, set it and return success; otherwise return failure. The key is the unique identifier of the lock and is generally named after the business.
Unlock command: DEL key — release the lock by deleting the key-value pair so that other threads can acquire it with SETNX.
Lock timeout: EXPIRE key timeout — set a timeout on the key so that, even if the lock is never explicitly released, it is released automatically after a while and the resource is not locked forever. The expiration is generally 3-5 seconds, by which time the business has usually completed; if not, the key still expires so that other threads can enter.

if (setnx(key, 1) == 1) {
    expire(key, 30);
    try {
        // TODO business logic
    } finally {
        del(key);
    }
}
If the expiration time is not set, a crashed lock holder leaves the key locked forever.

On the other hand, while the expiration guarantees that other threads can eventually enter, if the business has not finished when the lock expires and another thread gets in, concurrency problems arise. Re-entry by the same thread can be handled with ThreadLocal.withInitial(HashMap::new), a thread-local copy used to count entries.
Redis itself can also count lock re-entries: increment by 1 when locking, decrement by 1 when unlocking, and release the lock when the count returns to 0.
This design ensures the owning thread can re-enter and improves performance. For insert or update operations, it is advisable to extend the expiration time to prevent concurrent writes.
In fact, pessimistic locking can also be handled in the code logic; this is how the reentrant lock is processed:

private static ThreadLocal<Map<String, Integer>> LOCKERS = ThreadLocal.withInitial(HashMap::new);

// Lock (jedis and uuid are assumed to be fields initialized elsewhere)
public boolean lock(String key) {
  Map<String, Integer> lockers = LOCKERS.get();
  if (lockers.containsKey(key)) {
    // re-entry by the same thread: just increase the local count
    lockers.put(key, lockers.get(key) + 1);
    return true;
  } else {
    // SET key uuid NX EX 30: set if absent with a 30 s expiry, in one atomic command
    if (jedis.set(key, uuid, SetParams.setParams().nx().ex(30)) != null) {
      lockers.put(key, 1);
      return true;
    }
  }
  return false;
}

// Unlock
public void unlock(String key) {
  Map<String, Integer> lockers = LOCKERS.get();
  if (lockers.getOrDefault(key, 0) <= 1) {
    lockers.remove(key);
    jedis.del(key); // DEL key: release the Redis lock
  } else {
    lockers.put(key, lockers.get(key) - 1);
  }
}
Although locally recording the re-entry count is efficient, accounting for the expiration time and for consistency between the local count and Redis increases code complexity. Another approach implements the distributed lock with the Redis hash data structure, which stores both the lock owner's identification and the re-entry count. Example of reentrant locking as a Lua script:

-- If lock_key does not exist
if (redis.call('exists', KEYS[1]) == 0) then
    -- Set lock_key: field = thread ID, value = 1 (locked)
    redis.call('hset', KEYS[1], ARGV[2], 1);
    -- Set the expiration time
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
-- If lock_key exists and the thread ID is the current lock's thread ID
if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then
    -- Increment the re-entry count
    redis.call('hincrby', KEYS[1], ARGV[2], 1);
    -- Reset the expiration time
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
-- Locking failed: return the remaining TTL of the lock
return redis.call('pttl', KEYS[1]);
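For completeness, the matching unlock step can be sketched in the same style (this part is an assumption, not from the original article): decrement the count and delete the key only when the count reaches zero.

```lua
-- Only the owning thread may unlock
if (redis.call('hexists', KEYS[1], ARGV[2]) == 0) then
    return nil;
end;
-- Decrement the re-entry count
local counter = redis.call('hincrby', KEYS[1], ARGV[2], -1);
if (counter > 0) then
    -- Still held re-entrantly: refresh the expiration time
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return 0;
end;
-- Fully released: delete the lock key
redis.call('del', KEYS[1]);
return 1;
```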

Cache penetration

When the key queried by the user does not exist in Redis and the corresponding id does not exist in the database either — for example under attack by malicious users — a large number of requests hit the DB directly, which can bring it down and affect the whole system. This phenomenon is called cache penetration.

Solution: cache empty data, such as an empty string, empty object, empty array or list. The code is as follows:

if (object != null) {
    // Store the real data in the cache
} else {
    // Cache the empty value with a short TTL (5 minutes) so it expires quickly
    redisOperator.set("build", JsonUtils.objectToJson(object), 5*60);
}

Redis cache avalanche

Cache avalanche: a large amount of cached data expires at once, and then a large number of requests come in; because all the keys in Redis have expired, every request goes to the DB, causing downtime.

Solution: stagger the expiration times. Generate expiration times with a random component; hot data can be given longer expirations and non-hot data shorter ones.
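The staggering idea can be sketched as a small helper that adds a random jitter to a base TTL (the names and ranges below are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Base TTL plus a random jitter so keys written together do not expire together
    static long expireSeconds(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        long hot = expireSeconds(3600, 600);  // hot data: 1 h plus up to 10 min jitter
        long cold = expireSeconds(300, 60);   // non-hot data: 5 min plus up to 1 min jitter
        System.out.println(hot >= 3600 && hot <= 4200);
        System.out.println(cold >= 300 && cold <= 360);
    }
}
```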


Tags: Redis Spring Spring Boot

Posted on Wed, 01 Dec 2021 12:54:59 -0500 by ayok