The second-level cache of MyBatis
The MyBatis cache is divided into two levels: the first-level cache and the second-level cache. The first-level cache is scoped to the SqlSession, while the second-level cache is scoped to the mapper (namespace). This post, however, mainly introduces the Cache interface and the CacheKey class in MyBatis, along with some of the Cache implementations.
I previously wrote a post that briefly introduces Hibernate's two-level cache.
Link: https://blog.csdn.net/Let_me_...
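Before getting into the interfaces, here is a quick reminder of how the second-level cache is switched on for a mapper. This is a minimal sketch, not from the source analyzed below: the mapper, table, and query are made up for illustration, and the XML equivalent would be a <cache/> element in the mapper file.
import java.util.Map;
import org.apache.ibatis.annotations.CacheNamespace;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper, for illustration only. @CacheNamespace enables the
// second-level (namespace-level) cache for this mapper's statements.
@CacheNamespace
public interface BlogMapper {

  // Once the owning SqlSession commits/closes, results of this query can be
  // served from the second-level cache.
  @Select("SELECT id, title FROM blog WHERE id = #{id}")
  Map<String, Object> selectBlog(long id);
}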
Cache interface
Source location: org.apache.ibatis.cache.Cache
Cache is the cache container interface. It defines the cache operations, and every cache implementation class needs to implement this interface.
UML class diagram
The figure above lists the implementation classes of the Cache interface, each of which provides a different caching behavior.
The source code of the interface is very simple. It defines methods for adding, removing, and querying cached entries, plus methods for getting the number of entries in the container and the read-write lock. I have added brief comments; the source is as follows. The Cache interface is essentially a cache container, similar to a HashMap (some implementation classes do use a HashMap to store and operate on the cached data).
public interface Cache {

  /**
   * Get the cache identifier
   */
  String getId();

  /**
   * Put a value under the specified key
   */
  void putObject(Object key, Object value);

  /**
   * Get the value of the specified key
   */
  Object getObject(Object key);

  /**
   * Remove the value of the specified key
   */
  Object removeObject(Object key);

  /**
   * Clear the cache
   */
  void clear();

  /**
   * Get the number of entries in the container
   */
  int getSize();

  /**
   * Get the read-write lock
   */
  default ReadWriteLock getReadWriteLock() {
    return null;
  }

}
PerpetualCache
Source location: org.apache.ibatis.cache.impl.PerpetualCache
This is a cache that never expires. It uses a HashMap to store the data, overrides the equals() and hashCode() methods, and delegates the other cache operations directly to the HashMap.
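The class is short; here is an abridged sketch, paraphrased from the MyBatis source, so treat the details as approximate.
import java.util.HashMap;
import java.util.Map;

import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.CacheException;

public class PerpetualCache implements Cache {

  // Identifier of the cache, usually the mapper namespace
  private final String id;

  // All data lives in a plain HashMap and never expires on its own
  private final Map<Object, Object> cache = new HashMap<>();

  public PerpetualCache(String id) {
    this.id = id;
  }

  @Override
  public String getId() {
    return id;
  }

  @Override
  public int getSize() {
    return cache.size();
  }

  @Override
  public void putObject(Object key, Object value) {
    cache.put(key, value);
  }

  @Override
  public Object getObject(Object key) {
    return cache.get(key);
  }

  @Override
  public Object removeObject(Object key) {
    return cache.remove(key);
  }

  @Override
  public void clear() {
    cache.clear();
  }

  // equals()/hashCode() compare caches by their id
  @Override
  public boolean equals(Object o) {
    if (getId() == null) {
      throw new CacheException("Cache instances require an ID.");
    }
    if (this == o) {
      return true;
    }
    if (!(o instanceof Cache)) {
      return false;
    }
    return getId().equals(((Cache) o).getId());
  }

  @Override
  public int hashCode() {
    if (getId() == null) {
      throw new CacheException("Cache instances require an ID.");
    }
    return getId().hashCode();
  }
}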
LoggingCache
Source location: org.apache.ibatis.cache.decorators.LoggingCache
This is a Cache decorator that logs the cache hit ratio. The code is also very simple; I have added some comments.
public class LoggingCache implements Cache {

  /**
   * MyBatis Log object
   */
  private final Log log;

  /**
   * Decorated Cache object
   */
  private final Cache delegate;

  /**
   * Number of cache requests
   */
  protected int requests = 0;

  /**
   * Number of cache hits
   */
  protected int hits = 0;

  public LoggingCache(Cache delegate) {
    this.delegate = delegate;
    this.log = LogFactory.getLog(getId());
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object object) {
    delegate.putObject(key, object);
  }

  @Override
  public Object getObject(Object key) {
    // Increase the request count
    requests++;
    final Object value = delegate.getObject(key);
    if (value != null) {
      // Cache hit, increase the hit count
      hits++;
    }
    if (log.isDebugEnabled()) {
      // Print the cache hit ratio
      log.debug("Cache Hit Ratio [" + getId() + "]: " + getHitRatio());
    }
    return value;
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }

  /**
   * Calculate the hit ratio: hits / requests
   */
  private double getHitRatio() {
    return (double) hits / (double) requests;
  }
}
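All of the following classes are decorators around a delegate Cache. Inside MyBatis the chain is assembled by CacheBuilder, but as a rough hand-written illustration of how the decorators compose (the particular chain below is chosen arbitrarily, not copied from CacheBuilder):
import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.LoggingCache;
import org.apache.ibatis.cache.decorators.LruCache;
import org.apache.ibatis.cache.decorators.SynchronizedCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class DecoratorChainDemo {
  public static void main(String[] args) {
    // PerpetualCache does the actual storage; each decorator adds one concern
    Cache cache = new SynchronizedCache(        // thread safety
                    new LoggingCache(           // hit-ratio logging
                      new LruCache(             // LRU eviction
                        new PerpetualCache("demo-namespace"))));

    cache.putObject("key", "value");
    // Prints "value"; LoggingCache also logs the hit ratio at debug level if enabled
    System.out.println(cache.getObject("key"));
  }
}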
BlockingCache
Source location: org.apache.ibatis.cache.decorators.BlockingCache
This is a blocking Cache decorator whose distinctive logic is in its locking. When a thread tries to get a cache entry and the entry does not exist, subsequent threads asking for the same key are blocked; the current thread is expected to put the value into the cache, which keeps the other threads from loading and adding it repeatedly. Note that the removeObject method in this implementation does not delete the cached value; it only releases the lock.
public class BlockingCache implements Cache {

  /**
   * Blocking wait timeout
   */
  private long timeout;

  /**
   * Decorated Cache object
   */
  private final Cache delegate;

  /**
   * Mapping between cache keys and ReentrantLock objects
   */
  private final ConcurrentHashMap<Object, ReentrantLock> locks;

  public BlockingCache(Cache delegate) {
    this.delegate = delegate;
    this.locks = new ConcurrentHashMap<>();
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object value) {
    try {
      // Add to the cache
      delegate.putObject(key, value);
    } finally {
      // Release the lock
      releaseLock(key);
    }
  }

  @Override
  public Object getObject(Object key) {
    // Acquire the lock
    acquireLock(key);
    // Read from the cache
    Object value = delegate.getObject(key);
    if (value != null) {
      // Release the lock only on a hit; on a miss the caller is expected to put the value
      releaseLock(key);
    }
    return value;
  }

  @Override
  public Object removeObject(Object key) {
    // despite of its name, this method is called only to release locks
    // Release the lock corresponding to the key
    releaseLock(key);
    return null;
  }

  @Override
  public void clear() {
    delegate.clear();
  }

  /**
   * Get the ReentrantLock object for the key, creating it if it does not exist
   */
  private ReentrantLock getLockForKey(Object key) {
    return locks.computeIfAbsent(key, k -> new ReentrantLock());
  }

  private void acquireLock(Object key) {
    // Get the ReentrantLock object corresponding to the key
    Lock lock = getLockForKey(key);
    if (timeout > 0) {
      // Try to acquire the lock within the timeout
      try {
        boolean acquired = lock.tryLock(timeout, TimeUnit.MILLISECONDS);
        if (!acquired) {
          throw new CacheException("Couldn't get a lock in " + timeout + " for the key " + key + " at the cache " + delegate.getId());
        }
      } catch (InterruptedException e) {
        throw new CacheException("Got interrupted while trying to acquire lock for key " + key, e);
      }
    } else {
      // Acquire the lock, blocking until it becomes available
      lock.lock();
    }
  }

  private void releaseLock(Object key) {
    // Get the ReentrantLock object
    ReentrantLock lock = locks.get(key);
    if (lock.isHeldByCurrentThread()) {
      // Release only if the current thread holds the lock
      lock.unlock();
    }
  }

  public long getTimeout() {
    return timeout;
  }

  public void setTimeout(long timeout) {
    this.timeout = timeout;
  }
}
SynchronizedCache
Source location: org.apache.ibatis.cache.decorators.SynchronizedCache
This is a synchronized Cache decorator; it also delegates the cache operations to a decorated Cache, but adds the synchronized keyword to the getSize, putObject, getObject, removeObject, and clear methods. A sketch is shown below.
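An abridged sketch of what the class looks like (paraphrased, not the full source):
import org.apache.ibatis.cache.Cache;

public class SynchronizedCache implements Cache {

  private final Cache delegate;

  public SynchronizedCache(Cache delegate) {
    this.delegate = delegate;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  // Each reading/mutating method simply synchronizes on this wrapper
  // and forwards the call to the decorated cache.
  @Override
  public synchronized int getSize() {
    return delegate.getSize();
  }

  @Override
  public synchronized void putObject(Object key, Object object) {
    delegate.putObject(key, object);
  }

  @Override
  public synchronized Object getObject(Object key) {
    return delegate.getObject(key);
  }

  @Override
  public synchronized Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public synchronized void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }
}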
SerializedCache
Source location: org.apache.ibatis.cache.decorators.SerializedCache
This decorator supports serialized values: it serializes the value when it is added to the cache and deserializes it when the value is read back.
public class SerializedCache implements Cache {

  // Decorated Cache object
  private final Cache delegate;

  public SerializedCache(Cache delegate) {
    this.delegate = delegate;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object object) {
    if (object == null || object instanceof Serializable) {
      // Serialize the value before storing it
      delegate.putObject(key, serialize((Serializable) object));
    } else {
      throw new CacheException("SharedCache failed to make a copy of a non-serializable object: " + object);
    }
  }

  @Override
  public Object getObject(Object key) {
    Object object = delegate.getObject(key);
    // Deserialize the value when reading it back
    return object == null ? null : deserialize((byte[]) object);
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }

  private byte[] serialize(Serializable value) {
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
         ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(value);
      oos.flush();
      return bos.toByteArray();
    } catch (Exception e) {
      throw new CacheException("Error serializing object. Cause: " + e, e);
    }
  }

  private Serializable deserialize(byte[] value) {
    Serializable result;
    try (ByteArrayInputStream bis = new ByteArrayInputStream(value);
         ObjectInputStream ois = new CustomObjectInputStream(bis)) {
      result = (Serializable) ois.readObject();
    } catch (Exception e) {
      throw new CacheException("Error deserializing object. Cause: " + e, e);
    }
    return result;
  }

  public static class CustomObjectInputStream extends ObjectInputStream {

    public CustomObjectInputStream(InputStream in) throws IOException {
      super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc) throws ClassNotFoundException {
      return Resources.classForName(desc.getName());
    }
  }
}
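One practical consequence: because every read deserializes a fresh copy, callers can modify the returned object without corrupting the cached bytes (as far as I know, this is the decorator behind the read/write behavior of the second-level cache). A small standalone demo, with made-up key and values:
import java.util.ArrayList;
import java.util.List;

import org.apache.ibatis.cache.decorators.SerializedCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class SerializedCopyDemo {
  public static void main(String[] args) {
    SerializedCache cache = new SerializedCache(new PerpetualCache("serialized-demo"));

    List<String> value = new ArrayList<>();
    value.add("original");
    cache.putObject("key", value); // stored as serialized bytes

    // Each read deserializes a fresh copy, so mutating it does not affect the cache
    @SuppressWarnings("unchecked")
    List<String> copy = (List<String>) cache.getObject("key");
    copy.add("local change");

    System.out.println(cache.getObject("key")); // [original]
    System.out.println(copy);                   // [original, local change]
  }
}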
ScheduledCache
Source location: org.apache.ibatis.cache.decorators.ScheduledCache
This decorator clears the whole cache periodically; before each operation it checks whether the cache should be cleared.
public class ScheduledCache implements Cache {

  /**
   * Decorated Cache object
   */
  private final Cache delegate;

  /**
   * Clear interval, in milliseconds
   */
  protected long clearInterval;

  /**
   * Last clear time, in milliseconds
   */
  protected long lastClear;

  public ScheduledCache(Cache delegate) {
    this.delegate = delegate;
    // By default, clear the cache every hour
    this.clearInterval = 60 * 60 * 1000; // 1 hour
    this.lastClear = System.currentTimeMillis();
  }

  public void setClearInterval(long clearInterval) {
    this.clearInterval = clearInterval;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    // Check whether the whole cache should be cleared
    clearWhenStale();
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object object) {
    // Check whether the whole cache should be cleared
    clearWhenStale();
    delegate.putObject(key, object);
  }

  @Override
  public Object getObject(Object key) {
    // Check whether the whole cache should be cleared
    return clearWhenStale() ? null : delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    // Check whether the whole cache should be cleared
    clearWhenStale();
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    // Record the time of this clear
    lastClear = System.currentTimeMillis();
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }

  /**
   * Clear the whole cache if the clear interval has elapsed
   */
  private boolean clearWhenStale() {
    if (System.currentTimeMillis() - lastClear > clearInterval) {
      // Clear everything
      clear();
      return true;
    }
    return false;
  }
}
FifoCache
Source location: org.apache.ibatis.cache.decorators.FifoCache
This is a Cache decorator based on a first-in, first-out (FIFO) eviction policy. When a cache entry is removed via removeObject, the key is not removed from the internal key queue, so the old key still remains there.
In addition, when a key is added, the implementation does not check whether the key already exists; it only checks whether the queue length exceeds the upper limit, so adding the same key repeatedly leaves multiple identical keys in the queue. This is not exactly a bug, it simply allows duplicate keys. However, if you notice the same key seemingly cached more than once, you may want to check whether you are using the FifoCache implementation. A small demo of this behavior follows the listing below.
public class FifoCache implements Cache {

  /**
   * Decorated Cache object
   */
  private final Cache delegate;

  /**
   * Double-ended queue recording the order in which keys were added
   */
  private final Deque<Object> keyList;

  /**
   * Upper limit of the queue
   */
  private int size;

  public FifoCache(Cache delegate) {
    this.delegate = delegate;
    this.keyList = new LinkedList<>();
    this.size = 1024;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  public void setSize(int size) {
    this.size = size;
  }

  @Override
  public void putObject(Object key, Object value) {
    // Record the key and evict the oldest one if the queue is over its limit
    cycleKeyList(key);
    delegate.putObject(key, value);
  }

  @Override
  public Object getObject(Object key) {
    return delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    // Remove the cache entry; the key is not removed from keyList
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    // When clearing the cache, the maintained keyList is also cleared
    delegate.clear();
    keyList.clear();
  }

  private void cycleKeyList(Object key) {
    // Add the key to keyList
    keyList.addLast(key);
    if (keyList.size() > size) {
      // If the keyList is now longer than the queue limit, remove the head of the queue
      Object oldestKey = keyList.removeFirst();
      delegate.removeObject(oldestKey);
    }
  }
}
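The duplicate-key behavior described above can be reproduced with a tiny standalone test (the cache id and values are made up; this just exercises the code shown above):
import org.apache.ibatis.cache.decorators.FifoCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class FifoDuplicateKeyDemo {
  public static void main(String[] args) {
    FifoCache cache = new FifoCache(new PerpetualCache("fifo-demo"));
    cache.setSize(2);

    cache.putObject("k", "v1");
    cache.putObject("k", "v2"); // keyList is now ["k", "k"], the delegate holds only one "k"
    cache.putObject("a", "1");  // queue limit exceeded: the first "k" is evicted,
                                // which removes the only "k" entry from the delegate

    System.out.println(cache.getObject("k")); // null, although only 2 distinct keys were used
    System.out.println(cache.getObject("a")); // 1
  }
}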
LruCache
Source location: org.apache.ibatis.cache.decorators.LruCache
This decorator implements a least-recently-used (LRU) eviction policy. In short, when a new entry is added and the cache is found to have reached its upper limit, the least recently used key and its corresponding entry are evicted. The eviction relies on the access-order mechanism of LinkedHashMap; see the annotated source code below.
public class LruCache implements Cache {

  /**
   * Decorated Cache object
   */
  private final Cache delegate;

  /**
   * LinkedHashMap-based bookkeeping for the eviction mechanism
   */
  private Map<Object, Object> keyMap;

  /**
   * Oldest / least recently used key, i.e. the key to be evicted
   */
  private Object eldestKey;

  public LruCache(Cache delegate) {
    this.delegate = delegate;
    // Initialize the keyMap object
    setSize(1024);
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  /**
   * Initializes the keyMap. Note that this method is public, so you can specify the capacity
   * of the keyMap through it; the default value is 1024 and can be changed to suit your needs.
   */
  public void setSize(final int size) {
    // This LinkedHashMap constructor takes an accessOrder parameter; when it is true the map is
    // ordered by access: the least recently accessed entry comes first and the most recently
    // accessed entry comes last.
    keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
      private static final long serialVersionUID = 4267176411845948333L;

      @Override
      protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
        // Overrides LinkedHashMap's eviction hook: when it returns true the eldest entry is
        // removed from the map (by default LinkedHashMap never removes entries).
        // The condition here is that the keyMap has grown beyond the size given at
        // initialization. When it is met, the least recently used key is recorded in eldestKey;
        // right afterwards, cycleKeyList sees that eldestKey is not null and removes that key
        // from the decorated cache.
        boolean tooBig = size() > size;
        if (tooBig) {
          eldestKey = eldest.getKey();
        }
        return tooBig;
      }
    };
  }

  @Override
  public void putObject(Object key, Object value) {
    delegate.putObject(key, value);
    cycleKeyList(key);
  }

  @Override
  public Object getObject(Object key) {
    keyMap.get(key); // touch
    return delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
    keyMap.clear();
  }

  private void cycleKeyList(Object key) {
    // Add the key to the keyMap
    keyMap.put(key, key);
    // If the limit was exceeded, the least recently used key was recorded in eldestKey
    if (eldestKey != null) {
      // Remove the eldest key from the decorated cache
      delegate.removeObject(eldestKey);
      // Reset eldestKey
      eldestKey = null;
    }
  }
}
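To see the JDK mechanism LruCache relies on in isolation, here is a small standalone demo of LinkedHashMap with accessOrder = true and removeEldestEntry (plain JDK code, not part of MyBatis):
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
  public static void main(String[] args) {
    final int limit = 3;
    // accessOrder = true: iteration order is least-recently-accessed first
    Map<String, String> lru = new LinkedHashMap<String, String>(limit, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        // Evict automatically once the map grows beyond the limit
        return size() > limit;
      }
    };

    lru.put("a", "1");
    lru.put("b", "2");
    lru.put("c", "3");
    lru.get("a");        // touch "a" so it becomes the most recently used
    lru.put("d", "4");   // exceeds the limit, so the eldest entry ("b") is evicted

    System.out.println(lru.keySet()); // prints [c, a, d]
  }
}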
WeakCache
Source location: org.apache.ibatis.cache.decorators.WeakCache
This is a Cache decorator based on java.lang.ref.WeakReference. It maintains an internal queue of strong references to recently used values, and implements cache eviction through WeakEntry, an inner class that extends java.lang.ref.WeakReference. This implementation has a similar duplicate issue: the strong-reference queue does no uniqueness check, so repeated reads of the same key add the value to it multiple times.
public class WeakCache implements Cache {

  // Queue of strong references to recently used values, to keep them from being GC'd
  private final Deque<Object> hardLinksToAvoidGarbageCollection;

  // ReferenceQueue onto which WeakEntry objects are enqueued after their referents are GC'd
  private final ReferenceQueue<Object> queueOfGarbageCollectedEntries;

  // Decorated Cache object
  private final Cache delegate;

  // Size limit of hardLinksToAvoidGarbageCollection
  private int numberOfHardLinks;

  public WeakCache(Cache delegate) {
    this.delegate = delegate;
    this.numberOfHardLinks = 256;
    this.hardLinksToAvoidGarbageCollection = new LinkedList<>();
    this.queueOfGarbageCollectedEntries = new ReferenceQueue<>();
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    // Remove the WeakEntry objects whose referents have been GC'd
    removeGarbageCollectedItems();
    return delegate.getSize();
  }

  public void setSize(int size) {
    this.numberOfHardLinks = size;
  }

  @Override
  public void putObject(Object key, Object value) {
    // Remove the WeakEntry objects whose referents have been GC'd
    removeGarbageCollectedItems();
    // Add to the cache, wrapping the value in a WeakEntry
    delegate.putObject(key, new WeakEntry(key, value, queueOfGarbageCollectedEntries));
  }

  @Override
  public Object getObject(Object key) {
    Object result = null;
    // Get the WeakReference wrapper of the value
    @SuppressWarnings("unchecked") // assumed delegate cache is totally managed by this cache
    WeakReference<Object> weakReference = (WeakReference<Object>) delegate.getObject(key);
    if (weakReference != null) {
      // Get the value
      result = weakReference.get();
      if (result == null) {
        // The value is null, meaning it has been GC'd, so remove the stale cache entry
        delegate.removeObject(key);
      } else {
        // The value is still there: add it to the head of hardLinksToAvoidGarbageCollection so it
        // will not be GC'd. No uniqueness check is done, so the same value can be added repeatedly.
        hardLinksToAvoidGarbageCollection.addFirst(result);
        if (hardLinksToAvoidGarbageCollection.size() > numberOfHardLinks) {
          // If the queue exceeds its limit, drop the element at the tail
          hardLinksToAvoidGarbageCollection.removeLast();
        }
      }
    }
    return result;
  }

  @Override
  public Object removeObject(Object key) {
    // Remove the WeakEntry objects whose referents have been GC'd
    removeGarbageCollectedItems();
    // Remove the cache entry
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    // Clear hardLinksToAvoidGarbageCollection
    hardLinksToAvoidGarbageCollection.clear();
    // Remove the WeakEntry objects whose referents have been GC'd
    removeGarbageCollectedItems();
    // Clear the cache
    delegate.clear();
  }

  /**
   * Remove entries whose values have already been GC'd
   */
  private void removeGarbageCollectedItems() {
    WeakEntry sv;
    while ((sv = (WeakEntry) queueOfGarbageCollectedEntries.poll()) != null) {
      delegate.removeObject(sv.key);
    }
  }

  /**
   * Extends WeakReference and adds the cache key attribute
   */
  private static class WeakEntry extends WeakReference<Object> {

    // The cache key of this entry
    private final Object key;

    private WeakEntry(Object key, Object value, ReferenceQueue<Object> garbageCollectionQueue) {
      super(value, garbageCollectionQueue);
      this.key = key;
    }
  }
}
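The cleanup in removeGarbageCollectedItems is built on the standard WeakReference/ReferenceQueue contract. Here is a tiny JDK-only demo of that contract (System.gc() is only a hint, so the output is not strictly guaranteed):
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
  public static void main(String[] args) throws InterruptedException {
    ReferenceQueue<Object> queue = new ReferenceQueue<>();
    Object value = new Object();
    WeakReference<Object> ref = new WeakReference<>(value, queue);

    value = null;      // drop the only strong reference
    System.gc();       // request a collection (only a hint to the JVM)
    Thread.sleep(100); // give the collector a moment

    // Once the referent is collected, the reference is enqueued;
    // WeakCache polls its queue the same way and removes the matching keys.
    Reference<?> collected = queue.poll();
    System.out.println(collected == ref); // usually true after a GC
    System.out.println(ref.get());        // null once collected
  }
}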
SoftCache
Source location: org.apache.ibatis.cache.decorators.SoftCache
This is a Cache decorator based on java.lang.ref.SoftReference. SoftCache defines its own SoftEntry inner class, but otherwise it is basically the same as WeakCache, except that access to hardLinksToAvoidGarbageCollection is synchronized in some operations. The relevant excerpt is shown below.
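Rather than repeating the whole listing, here is only the part where SoftCache differs from WeakCache: getObject synchronizes on the strong-reference deque. This is an abridged method excerpt paraphrased from the MyBatis source, so treat it as a sketch; the surrounding fields are the same as in WeakCache, with SoftReference/SoftEntry in place of WeakReference/WeakEntry.
@Override
public Object getObject(Object key) {
  Object result = null;
  @SuppressWarnings("unchecked") // assumed delegate cache is totally managed by this cache
  SoftReference<Object> softReference = (SoftReference<Object>) delegate.getObject(key);
  if (softReference != null) {
    result = softReference.get();
    if (result == null) {
      // The referent was reclaimed under memory pressure, so drop the stale entry
      delegate.removeObject(key);
    } else {
      // Unlike WeakCache, access to the strong-reference deque is synchronized here
      synchronized (hardLinksToAvoidGarbageCollection) {
        hardLinksToAvoidGarbageCollection.addFirst(result);
        if (hardLinksToAvoidGarbageCollection.size() > numberOfHardLinks) {
          hardLinksToAvoidGarbageCollection.removeLast();
        }
      }
    }
  }
  return result;
}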
CacheKey
Source location: org.apache.ibatis.cache.CacheKey
The cache key in MyBatis is not just a String; it is computed from multiple objects together. CacheKey encapsulates the multiple attributes that influence a cache lookup.
public class CacheKey implements Cloneable, Serializable {

  private static final long serialVersionUID = 1146682552656046210L;

  // Singleton empty cache key
  public static final CacheKey NULL_CACHE_KEY = new NullCacheKey();

  // Default multiplier value
  private static final int DEFAULT_MULTIPLYER = 37;

  // Default hashcode value
  private static final int DEFAULT_HASHCODE = 17;

  // Multiplier used when computing the hashcode
  private final int multiplier;

  // hashcode of the cache key
  private int hashcode;

  // Checksum
  private long checksum;

  // Number of objects in updateList
  private int count;

  // 8/21/2017 - Sonarlint flags this as needing to be marked transient. While true if content is not serializable, this is not always true and thus should not be marked transient.
  // The collection of objects used to compute the hashcode
  private List<Object> updateList;

  public CacheKey() {
    this.hashcode = DEFAULT_HASHCODE;
    this.multiplier = DEFAULT_MULTIPLYER;
    this.count = 0;
    this.updateList = new ArrayList<>();
  }

  public CacheKey(Object[] objects) {
    this();
    // Update the related properties based on objects
    updateAll(objects);
  }

  public int getUpdateCount() {
    return updateList.size();
  }

  public void update(Object object) {
    // hashcode of the method parameter object
    int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);

    count++;
    // Add baseHashCode to the checksum
    checksum += baseHashCode;
    // Compute the hashcode value
    baseHashCode *= count;
    hashcode = multiplier * hashcode + baseHashCode;

    // Add the object to updateList
    updateList.add(object);
  }

  public void updateAll(Object[] objects) {
    // Traverse the objects array, calling update for each element to update the related properties
    for (Object o : objects) {
      update(o);
    }
  }

  @Override
  public boolean equals(Object object) {
    if (this == object) {
      return true;
    }
    if (!(object instanceof CacheKey)) {
      return false;
    }

    final CacheKey cacheKey = (CacheKey) object;

    if (hashcode != cacheKey.hashcode) {
      return false;
    }
    if (checksum != cacheKey.checksum) {
      return false;
    }
    if (count != cacheKey.count) {
      return false;
    }

    for (int i = 0; i < updateList.size(); i++) {
      Object thisObject = updateList.get(i);
      Object thatObject = cacheKey.updateList.get(i);
      if (!ArrayUtil.equals(thisObject, thatObject)) {
        return false;
      }
    }
    return true;
  }

  @Override
  public int hashCode() {
    return hashcode;
  }

  @Override
  public String toString() {
    StringJoiner returnValue = new StringJoiner(":");
    returnValue.add(String.valueOf(hashcode));
    returnValue.add(String.valueOf(checksum));
    updateList.stream().map(ArrayUtil::toString).forEach(returnValue::add);
    return returnValue.toString();
  }

  @Override
  public CacheKey clone() throws CloneNotSupportedException {
    // Clone the CacheKey object
    CacheKey clonedCacheKey = (CacheKey) super.clone();
    // Create a new updateList so later modifications do not affect the original
    clonedCacheKey.updateList = new ArrayList<>(updateList);
    return clonedCacheKey;
  }
}
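Two CacheKey instances built from the same sequence of update() calls compare equal, which is how MyBatis matches a query to a cached result. The ingredients below are made up for illustration; the real key is built by the executor from things like the statement id, the row bounds, the SQL, and the parameter values.
import org.apache.ibatis.cache.CacheKey;

public class CacheKeyDemo {
  public static void main(String[] args) {
    // Hypothetical ingredients of a query, for illustration only
    Object[] parts = {
        "com.example.BlogMapper.selectBlog",          // statement id
        0, Integer.MAX_VALUE,                         // row bounds (offset, limit)
        "SELECT id, title FROM blog WHERE id = ?",    // the SQL
        42L                                           // parameter value
    };

    CacheKey first = new CacheKey(parts);
    CacheKey second = new CacheKey(parts);

    System.out.println(first.equals(second));                 // true: same updates, same key
    System.out.println(first.hashCode() == second.hashCode()); // true

    second.update("something else");
    System.out.println(first.equals(second));                 // false after an extra update
  }
}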
NullCacheKey
Source location: org.apache.ibatis.cache.NullCacheKey
NullCacheKey extends CacheKey and represents an empty cache key.
public final class NullCacheKey extends CacheKey {

  private static final long serialVersionUID = 3704229911977019465L;

  public NullCacheKey() {
    super();
  }

  @Override
  public void update(Object object) {
    throw new CacheException("Not allowed to update a NullCacheKey instance.");
  }

  @Override
  public void updateAll(Object[] objects) {
    throw new CacheException("Not allowed to update a NullCacheKey instance.");
  }
}