[redis] cache and distributed lock


What data is suitable for caching?

  • Requirements for immediacy and strong data consistency are low
  • Data with high access volume and low update frequency (read-heavy, write-light)

Integrate redis

  1. Add the spring-boot-starter-data-redis dependency
  2. Configure the Redis host information
  3. Use the StringRedisTemplate auto-configured by Spring Boot to operate Redis
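The three steps above amount to very little configuration. A minimal sketch, assuming Maven and a Redis server at a placeholder address (the spring.redis.* keys are Spring Boot's standard property names):

```yaml
# 1. pom.xml: add the starter (the Lettuce client is pulled in by default)
#    <dependency>
#      <groupId>org.springframework.boot</groupId>
#      <artifactId>spring-boot-starter-data-redis</artifactId>
#    </dependency>

# 2. application.yml: point Spring Boot at the Redis host
spring:
  redis:
    host: 192.168.56.10   # placeholder address
    port: 6379
```

With this in place, Spring Boot auto-configures a StringRedisTemplate that can be injected and used directly, e.g. stringRedisTemplate.opsForValue().set("hello", "world").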

Off-heap memory overflow exception

Reason: since Spring Boot 2.0, Lettuce is the default Redis client, and it uses Netty for network communication.
A Lettuce bug leaks off-heap (direct) memory until Netty throws an OutOfDirectMemoryError. If -Dio.netty.maxDirectMemory is not specified, Netty's direct memory limit defaults to the JVM heap size (e.g. 300m under -Xmx300m); it can be changed with -Dio.netty.maxDirectMemory.
Solution: simply raising -Dio.netty.maxDirectMemory only postpones the overflow. Instead:

  1. Upgrade lettuce client
  2. Or switch to jedis


Cache failure under high concurrency - cache penetration

Cache penetration means querying data that is guaranteed not to exist. The cache never hits, so every request goes to the database, but the database has no such record either; because we do not write the null result into the cache, every request for the non-existent data falls through to the storage layer, defeating the purpose of the cache.

Attackers can exploit this with non-existent keys: the instantaneous pressure on the database spikes and eventually brings it down.
Solution: cache the null result, with a short expiration time.
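The null-result caching above can be sketched in plain Java, with an in-memory map standing in for Redis (the class and method names here are illustrative, not from the original code; with RedisTemplate the same idea is opsForValue().set(key, "null", 60, TimeUnit.SECONDS)):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache even "key does not exist" results, with a short TTL,
// so repeated queries for missing keys stop hitting the database.
class NullCache {
    private static final String NULL_SENTINEL = "<null>";      // marker for "no such row in DB"
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, Long> expireAt = new ConcurrentHashMap<>();
    private int dbHits = 0;                                    // how often we fell through to the "database"

    public String get(String key) {
        Long exp = expireAt.get(key);
        if (exp != null && System.currentTimeMillis() > exp) { // lazy expiration, like a Redis TTL
            cache.remove(key);
            expireAt.remove(key);
        }
        String cached = cache.get(key);
        if (cached != null) {
            // A cached null sentinel answers the request without touching the DB
            return NULL_SENTINEL.equals(cached) ? null : cached;
        }
        String fromDb = dbLookup(key);
        // Cache the null result too, but with a short TTL (60s here) so that
        // a row inserted later becomes visible quickly.
        cache.put(key, fromDb == null ? NULL_SENTINEL : fromDb);
        expireAt.put(key, System.currentTimeMillis() + 60_000);
        return fromDb;
    }

    private String dbLookup(String key) {
        dbHits++;
        return null;   // simulate a key that does not exist in the database
    }

    public int getDbHits() { return dbHits; }
}
```

The second query for the same missing key is answered by the cached sentinel, so the database is only consulted once.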

Cache failure under high concurrency - cache avalanche

Cache avalanche means that many keys were set with the same expiration time, so they all expire at the same moment; every request is then forwarded to the DB, which is overwhelmed by the instantaneous pressure.

Solution: add a random offset (e.g. 1-5 minutes) to the base expiration time, so the expiration times of the cached entries rarely coincide and a collective failure becomes unlikely.
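The random-offset idea can be sketched as a small helper (the class and method names are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: spread out expirations by adding a random 1-5 minute offset to a
// base TTL, so keys written at the same moment do not all expire together.
class TtlJitter {
    public static long ttlWithJitter(long baseSeconds) {
        // nextLong(60, 301) yields 60..300 seconds, i.e. 1 to 5 minutes
        long jitterSeconds = ThreadLocalRandom.current().nextLong(60, 301);
        return baseSeconds + jitterSeconds;
    }
}
```

With RedisTemplate this would be used as e.g. redisTemplate.opsForValue().set(key, value, TtlJitter.ttlWithJitter(3600), TimeUnit.SECONDS).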

Cache failure under high concurrency - cache breakdown

Some keys with an expiration time set are extremely "hot" and may be hit with very high concurrency at certain moments.
If such a key expires just as a large number of requests arrive simultaneously, all of those queries fall to the DB. This is called cache breakdown.

Solution: lock, so that under heavy concurrency only one request queries the DB while the others wait. The winner loads the data, writes it to the cache, and releases the lock; the others then acquire the lock, check the cache first, find the data, and never hit the DB.


Local lock

As long as all threads use the same lock object, they are all serialized by that lock.

  1. synchronized(this): all Spring Boot components are singletons in the container, so the service instance is a shared lock within one process
    A local lock can only lock the current process; across a cluster we need a distributed lock
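The local-lock pattern described above (lock, then double-check the cache before querying the DB) can be sketched as follows; the class name and loadFromDb are illustrative, with a map standing in for the cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: synchronized + double-check, so that within one process only the
// first thread queries the "database"; later threads find the cached value.
class LocalLockCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private int dbQueries = 0;

    public Object getData(String key) {
        Object value = cache.get(key);      // 1. check the cache without the lock
        if (value != null) {
            return value;
        }
        synchronized (this) {               // 2. the service is a singleton, so `this` is shared
            value = cache.get(key);         // 3. double-check: another thread may have
            if (value != null) {            //    loaded the data while we waited for the lock
                return value;
            }
            value = loadFromDb(key);        // 4. only one thread per process reaches here
            cache.put(key, value);          // 5. write to cache before releasing the lock
            return value;
        }
    }

    private Object loadFromDb(String key) {
        dbQueries++;
        return "data-for-" + key;
    }

    public int getDbQueries() { return dbQueries; }
}
```

Step 5 matters: if the cache write happened outside the lock, a waiting thread could re-query the DB between the release and the write.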

Distributed lock

Implementation principle:
redis SETNX lock 1111: all instances race to occupy the slot. Whoever occupies it (gets OK back) executes the business logic; whoever fails spins and waits.
Suppose 100 requests come in concurrently and the cache is empty, so the database must be queried. All service instances race to grab the slot in Redis; the thread that wins queries the database and writes the result into the cache. The remaining 99 keep retrying for the slot, and on their next attempt find the cache populated and return directly.
Requests arriving later find data in the cache and never go to Redis to grab the slot at all.

Problem: SETNX occupies the slot, but an exception or a crash occurs while querying the database, so the lock-deletion logic never runs - deadlock.
Fix: set an automatic expiration time on the lock, so it is removed even if it is never deleted explicitly.
[But the server may crash or deadlock between the SETNX and setting the expiration time.]
Therefore occupying the slot and setting the expiration time must be one atomic command.
Command that acquires the lock and sets the expiration at the same time: SET lock 1111 EX 10 NX
[If the business runs longer than the lock's TTL, the lock expires; when we then delete "our" lock, we may delete a lock already held by someone else.]
Fix: set the value to a UUID when occupying the lock, and only delete the lock whose value matches your own UUID.

[When deleting: we check that the lock is ours, but before the delete executes our lock expires and someone else acquires it.]
Therefore the check and the delete must also be one atomic operation, not two separate steps.
Delete the lock with a Lua script:

String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
redisTemplate.execute(new DefaultRedisScript<Long>(script, Long.class), Collections.singletonList("lock"), uuid);


Ensure atomicity of both locking [occupy + set expiration] and unlocking [check + delete].
More sophisticated locks renew themselves automatically (a "watchdog"), which makes it safe to set a longer lock expiration time.

    /**
     * Distributed lock
     * @return catalog data loaded under the protection of a Redis distributed lock
     */
    public Map<String, List<Catalog2Vo>> getCatalogJsonRedisLockFromDB() {
        //1. Occupy the distributed lock: go to Redis to occupy the slot,
        //   setting the value and the expiration time atomically
        String uuid = UUID.randomUUID().toString();
        Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", uuid, 300, TimeUnit.SECONDS);
        if (Boolean.TRUE.equals(lock)) {
            System.out.println("Successfully obtained distributed lock!");
            Map<String, List<Catalog2Vo>> dataFromDb;
            try {
                dataFromDb = getCatalogJsonFromDB();
            } finally {
                //Delete the lock atomically: only if its value still matches our uuid
                String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
                redisTemplate.execute(new DefaultRedisScript<>(script, Long.class), Collections.singletonList("lock"), uuid);
            }
            return dataFromDb;
        } else {
            //Locking failed: back off briefly, then retry
            System.out.println("Failed to acquire distributed lock, wait for retry");
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return getCatalogJsonRedisLockFromDB();
        }
    }
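The automatic renewal ("watchdog") mentioned earlier can be sketched like this, with an AtomicLong standing in for the lock's expiration timestamp in Redis; the class name is illustrative. In production this is what libraries such as Redisson do for you:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: while the business is still running, a background task keeps
// pushing the lock's expiration forward, so the lock never expires mid-work.
class LockWatchdog implements AutoCloseable {
    private final AtomicLong expireAtMillis;   // stands in for the TTL stored in Redis
    private final long leaseMillis;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    LockWatchdog(AtomicLong expireAtMillis, long leaseMillis) {
        this.expireAtMillis = expireAtMillis;
        this.leaseMillis = leaseMillis;
        renew();   // take the initial lease
        // Renew at one third of the lease period, so the lock is refreshed
        // well before it would expire.
        scheduler.scheduleAtFixedRate(this::renew,
                leaseMillis / 3, leaseMillis / 3, TimeUnit.MILLISECONDS);
    }

    private void renew() {
        expireAtMillis.set(System.currentTimeMillis() + leaseMillis);
    }

    @Override
    public void close() {   // business done: stop renewing, let the lock be deleted/expire
        scheduler.shutdownNow();
    }
}
```

With real Redis, renew() would instead run PEXPIRE on the lock key (guarded by the same UUID check as the delete script), and close() would run after the business logic in the finally block.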

How to ensure double write consistency between redis and mysql

Tags: Java Redis Distribution

Posted on Thu, 25 Nov 2021 14:27:03 -0500 by php_bachir