Love buy back website optimization - Redis cache

Why cache?
Redis introduction
Where Redis cache is used in the project
How to use Redis cache in projects
How to keep the cache in sync when the database is modified
Possible problems in caching
Resolve cache penetration
Caching code in the project
Redis eviction strategy

Why cache?

Reduce database pressure;
Increase request speed.
Once data is stored in the cache, subsequent queries can read it directly from the cache instead of the database, which both reduces database load and speeds up requests.

Redis introduction

Redis is a distributed caching technology: an in-memory key-value store that is one of the fastest caches available, and its command execution is single-threaded.

Where Redis cache is used in the project

The home page of the website. The home page is typically the place with the highest concurrency, so caching the home page data improves the site's ability to handle concurrent requests.

How to use Redis cache in projects

When the home page of the Love buy back website loads, it queries several necessary pieces of data from the database: commodity types, brands, and commodity information. The service layer that queries this data checks the cache first. If the data is cached, it is returned directly; if not, the database is queried and the result is stored in the cache (see the code later in this post).
This way, only the first visitor's request hits the database; everyone after that gets the data straight from the cache.

How to keep the cache in sync when the database is modified

1. When the database is modified, modify the cached data as well
2. Imitate MySQL's query-cache mechanism: once the data is modified, delete the affected cache entries (a sketch of this approach follows)
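
A minimal sketch of strategy 2 (delete the affected cache entry on every write). BrandService, BrandMapper, Brand, and the key "home:brands" are illustrative assumptions, not names taken from the project:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class BrandService {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private BrandMapper brandMapper; // hypothetical DAO for the brand table

    public void updateBrand(Brand brand) {
        // 1. Write the change to the database first
        brandMapper.update(brand);
        // 2. Then delete the stale cache entry; the next read repopulates it
        stringRedisTemplate.delete("home:brands");
    }
}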

Possible problems in caching

Cache penetration and cache avalanche
Cache penetration: under concurrency, a large number of requests bypass the cache and query the database directly.
Cache penetration can lead to a cache avalanche. An avalanche is a chain reaction with serious consequences. For example: penetration exhausts the database's connection pool, so the database becomes unusable; requests waiting on the database then hold Tomcat's worker threads, which are released very slowly, until Tomcat's thread pool is exhausted as well; at that point no user can access the website. This chain of effects is called an avalanche, and because it starts in the cache, it is called a cache avalanche.

Resolve cache penetration

The goal is to stop requests from bypassing the cache and hitting the database directly.
The plan:
1. Synchronize: lock around the database query so only one thread queries at a time
2. Double detection: check the cache again inside the lock, since another thread may have filled it while we waited
3. Whether or not the database query returns data, write the result to the cache, so that queries for nonexistent keys cannot keep missing the cache
4. Set an expiration time on the empty cache entries, neither too long nor too short: too long wastes memory; too short lets the empty entries expire before they do any good

Caching code in the project

import com.google.gson.Gson;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.lang.reflect.Type;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

@Component
public class RedisHelper {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    public <T> List<T> cache(String key, Type type, IDBResultCallBack<T> idbResultCallBack) {
        // What we get from the cache is a JSON string
        List<T> query = null;
        // Lock so that while one thread has queried the database but not yet
        // written the cache, other threads cannot hit the database as well
        synchronized (this) {
            System.out.println("Queue in");
            String jsonStr = stringRedisTemplate.boundValueOps(key).get();
            Gson gson = new Gson();
            query = gson.fromJson(jsonStr, type);
            if (query == null) {
                System.out.println("Check the database");
                query = idbResultCallBack.callback();
                // Cache the data that was found
                if (query != null && !query.isEmpty()) { // fixed: was ||, which throws NPE when query is null
                    String toJson = gson.toJson(query, type);
                    stringRedisTemplate.boundValueOps(key).set(toJson);
                } else {
                    // When the database returns nothing, cache the empty result too
                    // (to avoid cache penetration), but give the empty entries an
                    // expiration time so they cannot exhaust memory
                    if (query == null) {
                        query = Collections.emptyList(); // fixed: caching null would store the string "null", which parses back to null
                    }
                    String toJson = gson.toJson(query, type);
                    stringRedisTemplate.boundValueOps(key).set(toJson, 30, TimeUnit.SECONDS);
                }
            }
        }
        return query;
    }
}
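
The IDBResultCallBack interface is not shown in the post. A minimal sketch of what RedisHelper appears to assume (a single-method callback wrapping the real database query), so the class above compiles:

import java.util.List;

// Assumed shape of the callback; with a single abstract method,
// callers can pass the database query as a lambda.
public interface IDBResultCallBack<T> {
    List<T> callback();
}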

Problems:
Under heavy concurrency, every thread has to queue up for the lock even when the data is already cached, which is very slow.

Solution:
Add another layer of checking outside the lock. If the cache already holds a value, return it directly instead of waiting for the lock.

import com.google.gson.Gson;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.lang.reflect.Type;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

@Component
public class RedisHelper {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    public <T> List<T> cache(String key, Type type, IDBResultCallBack<T> idbResultCallBack) {
        List<T> query = null;
        Gson gson = new Gson();
        String str = stringRedisTemplate.boundValueOps(key).get();
        query = gson.fromJson(str, type);
        // First check: on a cache hit, threads return at once instead of waiting for the lock
        if (query == null) {
            // Lock so that while one thread has queried the database but not yet
            // written the cache, other threads cannot hit the database as well
            synchronized (this) {
                System.out.println("Queue in");
                // Second check: the cache may have been filled while we waited for the lock
                String jsonStr = stringRedisTemplate.boundValueOps(key).get();
                query = gson.fromJson(jsonStr, type);
                if (query == null) {
                    System.out.println("Check the database");
                    query = idbResultCallBack.callback();
                    // Cache the data that was found
                    if (query != null && !query.isEmpty()) { // fixed: was ||, which throws NPE when query is null
                        String toJson = gson.toJson(query, type);
                        stringRedisTemplate.boundValueOps(key).set(toJson);
                    } else {
                        // When the database returns nothing, cache the empty result too
                        // (to avoid cache penetration), but give the empty entries an
                        // expiration time so they cannot exhaust memory
                        if (query == null) {
                            query = Collections.emptyList(); // fixed: caching null would store the string "null", which parses back to null
                        }
                        String toJson = gson.toJson(query, type);
                        stringRedisTemplate.boundValueOps(key).set(toJson, 30, TimeUnit.SECONDS);
                    }
                }
            }
        }
        return query;
    }
}
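
For completeness, a hedged example of calling the helper from a service layer. ProductService, ProductMapper, Product, and the key "home:products" are illustrative assumptions, not names from the project:

import com.google.gson.reflect.TypeToken;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.lang.reflect.Type;
import java.util.List;

@Service
public class ProductService {

    @Autowired
    private RedisHelper redisHelper;

    @Autowired
    private ProductMapper productMapper; // hypothetical DAO for commodity data

    public List<Product> listHomeProducts() {
        // Gson needs the full generic type to deserialize List<Product>
        Type type = new TypeToken<List<Product>>() {}.getType();
        // The lambda becomes the IDBResultCallBack and runs only on a cache miss
        return redisHelper.cache("home:products", type, () -> productMapper.selectHomeProducts());
    }
}

Note that synchronized (this) only serializes threads within a single JVM; if the application is deployed as several instances, each instance may still query the database once per key.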

Redis eviction strategy

Eviction strategy

Similar to GC, eviction is triggered only when memory is insufficient. The default policy is noeviction (see the list below); a commonly configured policy is allkeys-lru, which evicts the least recently used keys first to make room.

Redis has 6 eviction policies (as of version 3.2.9):

·noeviction (default): return an error when the memory limit is reached and the client attempts a command that would use more memory (most write commands; DEL and a few others are exceptions).

Even when memory is full, no data is evicted; if new data cannot be added, an error is returned instead.

·allkeys-lru: evict the least recently used (LRU) keys so that newly added data has room.

Using the LRU algorithm, the least recently used keys are evicted to make room for new keys.

·volatile-lru: evict the least recently used (LRU) keys, but only among keys with an expiration set, so that newly added data has room.

(Only keys with an expiration set are candidates.) Within that set, LRU decides which key is evicted first, making room for new keys.

·allkeys-random: evict random keys to make room for newly added data.

·volatile-random: evict random keys to make room for newly added data, but only among keys with an expiration set.

·volatile-ttl: evict keys with an expiration set, preferring those with the shortest remaining TTL, so that newly added data has room.
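
The policy is chosen with the maxmemory-policy directive, either in redis.conf or at runtime; a minimal sketch (the 256mb limit is an illustrative value):

# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru

Or at runtime via redis-cli: CONFIG SET maxmemory-policy allkeys-lru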
