1, ReentrantLock
After introducing AQS in the previous article, let's look at one of its main applications: ReentrantLock. ReentrantLock is implemented on top of CAS plus the AQS queue and supports both fair and non-fair locking.
A basic usage example of ReentrantLock:
```java
private Lock lock = new ReentrantLock();

public void test() {
    lock.lock();
    try {
        doSomeThing();
    } catch (Exception e) {
        // ignored
    } finally {
        lock.unlock();
    }
}
```
The basic implementation of ReentrantLock can be summarized as follows: a thread first tries to obtain the lock via CAS. If the lock is already held by another thread, the current thread joins the AQS (CLH) queue and is suspended. When the lock is released, the thread at the head of the queue is woken up and tries to acquire the lock again with CAS. At this point:
- If the ReentrantLock instance is a non-fair lock: another thread that arrives at the same moment may grab the lock first;
- If the ReentrantLock instance is a fair lock: a newly arriving thread that finds it is not at the head of the queue enqueues at the tail, and the thread at the head of the queue obtains the lock.
1.1 Features of ReentrantLock
- 1. Reentrant lock
A reentrant lock means that the same thread can acquire the same lock multiple times. Both ReentrantLock and synchronized are reentrant locks.
- 2. Interruptible lock
An interruptible lock is one where a thread can respond to an interrupt while waiting to obtain the lock. synchronized is non-interruptible, while ReentrantLock provides interruptible acquisition.
- 3. Fair lock and non-fair lock
A fair lock means that when multiple threads try to acquire the same lock, the acquisition order follows the absolute time order of the requests, i.e. FIFO; a non-fair lock allows threads to "jump the queue". synchronized is a non-fair lock, and the default implementation of ReentrantLock is also a non-fair lock, but it can be configured as a fair lock.
ReentrantLock provides two constructors:
```java
public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
```
The no-argument constructor creates a NonfairSync object, that is, a non-fair lock; the constructor with a parameter lets you choose between fair and non-fair locks.
- 4. Timeout: the lock can be acquired with a timeout (see section 1.5).
1.2 Reentrancy
To support reentrancy, two problems must be solved:
- When a thread tries to acquire the lock, if the thread that already holds the lock is the current thread, it should acquire the lock again directly;
- Since the lock may be acquired n times, it is fully released only after it has been released the same n times.
- 1. Acquire lock
For the first problem, let's see how ReentrantLock implements it. Take the non-fair lock as an example; the core method that decides whether the current thread can obtain the lock is nonfairTryAcquire:
```java
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    // 1. If the lock is not held by any thread, the current thread can obtain it
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // 2. If it is held, check whether the holding thread is the current thread
    else if (current == getExclusiveOwnerThread()) {
        // 3. Acquire again and add one to the count
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
To support reentrancy, extra logic is added in the second step. If the lock is already held by a thread, the method checks whether that holding thread is the current thread. If so, it adds 1 to the synchronization state and returns true, meaning the lock was acquired again. Each reacquisition increases the synchronization state by one.
- 2. Release lock
Take the non-fair lock as an example again. The core method is tryRelease:
```java
protected final boolean tryRelease(int releases) {
    // 1. Subtract 1 from the synchronization state
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        // 2. Only when the synchronization state reaches 0 is the lock released successfully, returning true
        free = true;
        setExclusiveOwnerThread(null);
    }
    // 3. If the lock has not been fully released, false is returned
    setState(c);
    return free;
}
```
Note that a reentrant lock is only fully released when the synchronization state drops to 0; before that, the lock is still held. If the lock was acquired n times and released only n-1 times, it is not yet released and tryRelease returns false. Only after the nth release does it return true.
- 3. Example of reentry lock
```java
public static void main(String[] args) {
    ReentrantLock lock = new ReentrantLock();
    for (int i = 1; i <= 3; i++) {
        lock.lock();
        System.out.println("lock" + i);
    }
    for (int i = 1; i <= 3; i++) {
        try {
        } finally {
            lock.unlock();
            System.out.println("unlock" + i);
        }
    }
}
```
Results:
lock1
lock2
lock3
unlock1
unlock2
unlock3
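As a side note, the reentry count maintained in the synchronization state can be observed directly through ReentrantLock's getHoldCount() method; a minimal sketch:

```java
public static void main(String[] args) {
    ReentrantLock lock = new ReentrantLock();
    lock.lock();
    lock.lock();
    System.out.println(lock.getHoldCount()); // 2: the same thread holds the lock twice
    lock.unlock();
    System.out.println(lock.getHoldCount()); // 1: released once, still held
    lock.unlock();
    System.out.println(lock.getHoldCount()); // 0: fully released
}
```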
1.3 Non-fair lock and fair lock
1.3.1 Non-fair lock
That is, NonfairSync.
- 1. Acquire lock
Source code:
```java
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}
```
The logic of the lock method in NonfairSync: a CAS operation checks whether state is 0 (meaning the lock is currently free). If it is 0, state is set to 1 and the current thread is set as the exclusive owner of the lock, meaning the lock was obtained successfully. When multiple threads try to grab the same lock at the same time, CAS guarantees that only one of them succeeds; the remaining threads have to queue.
The "unfairness" shows up here: if the thread holding the lock has just released it (setting state back to 0) and the queued threads have not yet woken up, a newly arriving thread can grab the lock directly, "jumping the queue".
Suppose three threads compete for the lock and thread A's CAS succeeds; threads B and C then execute the acquire method in AQS:
```java
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
The logic of this method was covered in the AQS article and is not repeated here.
- 2. Release lock
That is, the unlock() method:
```java
public void unlock() {
    sync.release(1);
}

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}
```
The unlock() flow: first try to release the lock. If the release succeeds, check whether the head node's waitStatus is non-zero (typically SIGNAL); if so, wake up the thread associated with the node following the head. If the release fails, false is returned, meaning the unlock did not complete.
tryRelease source code:
```java
/**
 * Release the lock held by the current thread
 * @param releases
 * @return whether the release succeeded
 */
protected final boolean tryRelease(int releases) {
    // Calculate the state value after release
    int c = getState() - releases;
    // If the lock is not held by the current thread, throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        // The reentry count has dropped to 0, so the release succeeds
        free = true;
        // Clear the exclusive owner thread
        setExclusiveOwnerThread(null);
    }
    // Update the state value
    setState(c);
    return free;
}
```
The argument here is 1. The flow of tryRelease: if the thread releasing the lock does not hold it, an exception is thrown. If it does, the method computes the state value after the release; if it is 0, the lock is fully released and the exclusive owner is cleared. Finally the state value is updated and free is returned.
1.3.2 Fair lock
That is, FairSync. The difference from the non-fair lock is that when a fair lock acquires the lock, it does not try a barging CAS on the state first, but directly executes acquire(1). lock() source code of FairSync:
```java
final void lock() {
    acquire(1);
}
```
acquire is a template method in AQS. It calls the subclass's (FairSync's) tryAcquire(int acquires):
```java
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
The logic is basically the same as nonfairTryAcquire; the only difference is the additional hasQueuedPredecessors check. That method determines whether the current thread has a predecessor node in the synchronization queue. If it does, some thread requested the lock earlier than the current thread, so, to stay fair, the current attempt fails. Only when there is no predecessor does the method continue with the CAS. In other words, a fair lock is always granted to the first node in the synchronization queue, while with a non-fair lock the thread that just released the lock may acquire it again immediately.
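For reference, in JDK 8 the hasQueuedPredecessors method of AQS looks like this: it returns true when the queue is non-empty and the thread at the front of the queue is not the current thread.

```java
public final boolean hasQueuedPredecessors() {
    // The correctness of this depends on head being initialized
    // before tail and on head.next being accurate if the current
    // thread is first in queue.
    Node t = tail; // Read fields in reverse initialization order
    Node h = head;
    Node s;
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}
```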
1.3.3 Comparison of non-fair lock and fair lock
First, a test example:
```java
public class TestDemo {
    static Lock lock = new ReentrantLock(true);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            new Thread(new ThreadDemo(i)).start();
        }
    }

    static class ThreadDemo implements Runnable {
        Integer id;

        public ThreadDemo(Integer id) {
            this.id = id;
        }

        @Override
        public void run() {
            try {
                TimeUnit.MILLISECONDS.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            for (int i = 0; i < 2; i++) {
                lock.lock();
                System.out.println("Thread obtaining lock: " + id);
                lock.unlock();
            }
        }
    }
}
```
From the test results with the fair lock, the threads take turns obtaining the lock almost perfectly.
Now change the code above to use a non-fair lock. The results show that a thread will repeatedly obtain the lock. If enough threads are applying for the lock, some threads may not get it for a long time: this is the "starvation" problem of non-fair locks.
Comparison between fair and non-fair locks:
- A fair lock is always granted to the first node in the synchronization queue, guaranteeing that lock acquisition follows the absolute time order of the requests. With a non-fair lock, the thread that just released the lock may acquire it again right away, which can leave other threads waiting indefinitely and cause "starvation".
- To guarantee that absolute time order, a fair lock requires frequent context switches, while a non-fair lock reduces context switching and its overhead. That is why ReentrantLock defaults to the non-fair lock: fewer context switches and higher overall throughput.
1.4 Responding to interrupts
When synchronized is used as the lock, a thread blocked on the lock waits until it obtains it; the indefinite wait cannot be interrupted. ReentrantLock provides lockInterruptibly(), a lock-acquisition method that responds to interrupts, which can be used to break out of deadlocks.
Consider an example: two child threads each try to acquire two locks. One thread acquires lock 1 first and then lock 2; the other does the opposite. Without external intervention the program deadlocks and never stops. At this point one of the threads can be interrupted to end the pointless waiting: the interrupted thread throws an exception, and the other thread can then acquire its lock and finish normally. Example:
```java
public class TestDemo {
    static Lock lock1 = new ReentrantLock();
    static Lock lock2 = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new ThreadDemo(lock1, lock2));  // acquires lock 1 first, then lock 2
        Thread thread1 = new Thread(new ThreadDemo(lock2, lock1)); // acquires lock 2 first, then lock 1
        thread.start();
        thread1.start();
        thread.interrupt(); // interrupt the first thread
    }

    static class ThreadDemo implements Runnable {
        Lock firstLock;
        Lock secondLock;

        public ThreadDemo(Lock firstLock, Lock secondLock) {
            this.firstLock = firstLock;
            this.secondLock = secondLock;
        }

        @Override
        public void run() {
            try {
                firstLock.lockInterruptibly();
                try {
                    TimeUnit.MILLISECONDS.sleep(10); // make the deadlock easier to trigger
                    secondLock.lockInterruptibly();
                    try {
                        // critical section guarded by both locks
                    } finally {
                        secondLock.unlock();
                    }
                } finally {
                    firstLock.unlock();
                }
                System.out.println(Thread.currentThread().getName() + " ends normally!");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```
Results: the interrupted thread prints the interrupt exception stack trace, and the other thread acquires its locks and ends normally.
1.5 Timeout mechanism
In ReentrantLock, tryLock(long timeout, TimeUnit unit) provides lock acquisition with a timeout: it returns true if the lock is obtained within the specified time and false otherwise. This mechanism prevents a thread from waiting indefinitely for the lock to be released.
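A minimal usage sketch first (the field name lock and the one-second timeout are illustrative):

```java
private final Lock lock = new ReentrantLock();

public void doWork() throws InterruptedException {
    // Wait at most one second for the lock instead of blocking indefinitely
    if (lock.tryLock(1, TimeUnit.SECONDS)) {
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    } else {
        // Could not obtain the lock within the timeout: give up, retry, or report
    }
}
```

The source of tryLock(long, TimeUnit) and the tryAcquireNanos it delegates to: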
```java
public boolean tryLock(long timeout, TimeUnit unit)
        throws InterruptedException {
    return sync.tryAcquireNanos(1, unit.toNanos(timeout));
}
```
```java
public final boolean tryAcquireNanos(int arg, long nanosTimeout)
        throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    return tryAcquire(arg) ||
        doAcquireNanos(arg, nanosTimeout);
}
```
If the thread has been interrupted, InterruptedException is thrown immediately. Otherwise the lock acquisition is attempted first; if it succeeds, the method returns directly, and if it fails, doAcquireNanos is entered.
```java
/**
 * Compete for the lock within a limited time
 * @return whether the acquisition succeeded
 */
private boolean doAcquireNanos(int arg, long nanosTimeout)
        throws InterruptedException {
    // Start time
    long lastTime = System.nanoTime();
    // Enqueue the thread
    final Node node = addWaiter(Node.EXCLUSIVE);
    boolean failed = true;
    try {
        // Spin again!
        for (;;) {
            // Get the predecessor node
            final Node p = node.predecessor();
            // If the predecessor is the head node and the lock is acquired, the current node becomes the head
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return true;
            }
            // If the timeout has expired, return false
            if (nanosTimeout <= 0)
                return false;
            // The timeout has not expired, so the thread needs to be suspended
            if (shouldParkAfterFailedAcquire(p, node) &&
                nanosTimeout > spinForTimeoutThreshold)
                // Block the current thread until the remaining timeout elapses
                LockSupport.parkNanos(this, nanosTimeout);
            long now = System.nanoTime();
            // Update nanosTimeout
            nanosTimeout -= now - lastTime;
            lastTime = now;
            if (Thread.interrupted()) // respond to the interrupt
                throw new InterruptedException();
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
```
Briefly, the flow of this method: the thread first enters the wait queue, then spins, trying to acquire the lock; it returns on success. On failure it finds a safe point in the queue and parks itself until the remaining timeout elapses. Why is a loop needed here? Because the waitStatus of the current node's predecessor may not yet be SIGNAL, in which case the thread is not parked in this round; the timeout is updated and a new attempt starts.
An example of using the timeout mechanism to avoid deadlock:
```java
public class TestDemo {
    static Lock lock1 = new ReentrantLock();
    static Lock lock2 = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new ThreadDemo(lock1, lock2));  // acquires lock 1 first, then lock 2
        Thread thread1 = new Thread(new ThreadDemo(lock2, lock1)); // acquires lock 2 first, then lock 1
        thread.start();
        thread1.start();
    }

    static class ThreadDemo implements Runnable {
        Lock firstLock;
        Lock secondLock;

        public ThreadDemo(Lock firstLock, Lock secondLock) {
            this.firstLock = firstLock;
            this.secondLock = secondLock;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    if (firstLock.tryLock()) {
                        try {
                            if (secondLock.tryLock()) {
                                try {
                                    System.out.println(Thread.currentThread().getName() + " ends normally!");
                                    return;
                                } finally {
                                    secondLock.unlock();
                                }
                            }
                        } finally {
                            // Back off: release the first lock so the other thread can make progress
                            firstLock.unlock();
                        }
                    }
                    TimeUnit.MILLISECONDS.sleep(10);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```
Results:
Thread-0 ends normally!
Thread-1 ends normally!
2, ReentrantReadWriteLock
2.1 Features of ReentrantReadWriteLock
Consider this scenario: a shared resource has both read and write operations, and writes are much less frequent than reads. When no write is in progress, there is no problem with multiple threads reading the resource at the same time, so concurrent reads should be allowed; but when a thread wants to write to the shared resource, no other thread should be allowed to read or write it.
For this scenario ReentrantReadWriteLock can be used. A read-write lock actually represents two locks: one related to read operations, called the shared lock, and one related to write operations, called the exclusive lock. The read-write lock allows multiple reader threads to proceed at the same time, but while a writer thread is active, all reader threads and other writer threads are blocked.
Read-write locks have three important features:
- Fairness: both non-fair (default) and fair acquisition modes are supported; the throughput of the non-fair mode is higher than that of the fair mode.
- Reentrancy: both the read lock and the write lock support reentry by the same thread.
- Lock downgrading: by acquiring the write lock, then the read lock, and then releasing the write lock, a write lock can be downgraded to a read lock.
If a thread wants to hold both the write lock and the read lock, it must acquire the write lock first and then the read lock. A write lock can be "downgraded" to a read lock, but a read lock cannot be "upgraded" to a write lock, as shown in the sketch below.
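A sketch of lock downgrading, adapted from the usage pattern shown in the ReentrantReadWriteLock Javadoc (loadData and use are illustrative placeholders):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private Object data;
    private volatile boolean cacheValid;

    public void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release the read lock before acquiring the write lock:
            // upgrading a read lock to a write lock is not allowed
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                if (!cacheValid) {        // recheck: another thread may have refreshed the cache
                    data = loadData();
                    cacheValid = true;
                }
                // Downgrade: acquire the read lock while still holding the write lock...
                rwl.readLock().lock();
            } finally {
                rwl.writeLock().unlock(); // ...then release the write lock, keeping the read lock
            }
        }
        try {
            use(data);                    // read under the (possibly downgraded) read lock
        } finally {
            rwl.readLock().unlock();
        }
    }

    private Object loadData() { return new Object(); } // placeholder
    private void use(Object d) { /* ... */ }           // placeholder
}
```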
The general structure of the ReentrantReadWriteLock class:
```java
public class ReentrantReadWriteLock implements ReadWriteLock, java.io.Serializable {
    /** Read lock */
    private final ReentrantReadWriteLock.ReadLock readerLock;
    /** Write lock */
    private final ReentrantReadWriteLock.WriteLock writerLock;

    final Sync sync;

    /** Create a new ReentrantReadWriteLock with the default (non-fair) ordering policy */
    public ReentrantReadWriteLock() {
        this(false);
    }

    /** Create a new ReentrantReadWriteLock with the given fairness policy */
    public ReentrantReadWriteLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
        readerLock = new ReadLock(this);
        writerLock = new WriteLock(this);
    }

    /** Returns the lock used for write operations */
    public ReentrantReadWriteLock.WriteLock writeLock() { return writerLock; }

    /** Returns the lock used for read operations */
    public ReentrantReadWriteLock.ReadLock readLock() { return readerLock; }

    abstract static class Sync extends AbstractQueuedSynchronizer {...}

    static final class NonfairSync extends Sync {...}

    static final class FairSync extends Sync {...}

    public static class ReadLock implements Lock, java.io.Serializable {...}

    public static class WriteLock implements Lock, java.io.Serializable {...}
}
```
ReentrantReadWriteLock implements the ReadWriteLock interface, which defines the contract for obtaining the read lock and the write lock; the implementation class provides the actual behavior. ReadWriteLock itself is very simple and only declares two methods:
```java
public interface ReadWriteLock {
    Lock readLock();
    Lock writeLock();
}
```
ReentrantReadWriteLock has five inner classes:
Sync extends AQS; NonfairSync and FairSync both extend Sync; ReadLock and WriteLock both implement the Lock interface.
2.2 Sync
The Sync class has two inner classes, HoldCounter and ThreadLocalHoldCounter. HoldCounter is mainly used together with the read lock:
```java
// Counter
static final class HoldCounter {
    // The number of times a reader thread has re-entered; used for counting
    int count = 0;
    // Use id, not reference, to avoid garbage retention
    // Gets the tid of the current thread
    final long tid = getThreadId(Thread.currentThread());
}
```
Source code of ThreadLocalHoldCounter:
```java
// Thread-local counter
static final class ThreadLocalHoldCounter
        extends ThreadLocal<HoldCounter> {
    // Override the initialization method so a HoldCounter exists without an explicit set()
    public HoldCounter initialValue() {
        return new HoldCounter();
    }
}
```
Fields of the Sync class:
```java
abstract static class Sync extends AbstractQueuedSynchronizer {
    // Serial version UID
    private static final long serialVersionUID = 6317671515068378041L;

    // The upper 16 bits of state hold the read count, the lower 16 bits hold the write count
    static final int SHARED_SHIFT   = 16;
    // One unit of read-lock count
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
    // Maximum count for read or write locks (65535)
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;
    // Mask for the write-lock count (the lower 16 bits)
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    // Thread-local counter
    private transient ThreadLocalHoldCounter readHolds;
    // Cached counter
    private transient HoldCounter cachedHoldCounter;
    // First reader thread
    private transient Thread firstReader = null;
    // Hold count of the first reader thread
    private transient int firstReaderHoldCount;
}
```
Constructor of the Sync class:
```java
// Constructor
Sync() {
    // Thread-local counter
    readHolds = new ThreadLocalHoldCounter();
    // Set the AQS state
    setState(getState()); // ensures visibility of readHolds
}
```
2.3 Design of the read-write state
In the reentrant lock above, the synchronization state represents the number of times the same thread has acquired the lock; it is maintained in a single integer, and the state in ReentrantLock only indicates whether the lock is held, with no distinction between reading and writing. A read-write lock, however, must maintain the state of multiple reader threads and one writer thread inside the same integer.
The read-write lock does this by splitting the bits of one integer variable: the upper 16 bits represent the read state and the lower 16 bits represent the write state.
Assuming the current synchronization state value is S, the read and write operations are:
- Get the write state: S & 0x0000FFFF (clear the upper 16 bits).
- Get the read state: S >>> 16 (unsigned right shift by 16 bits).
- Increase the write state by 1: S + 1.
- Increase the read state by 1: S + (1 << 16), i.e. S + 0x00010000.
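A small illustration of this bit splitting (the constants mirror those of the Sync class shown above; the values are for demonstration only):

```java
public class ReadWriteStateDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = 1 << SHARED_SHIFT;       // one read acquisition
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // low 16 bits: write count

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; } // read count: high 16 bits
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; } // write count: low 16 bits

    public static void main(String[] args) {
        int s = 0;
        s = s + 1;           // write state + 1 (low 16 bits)
        s = s + SHARED_UNIT; // read state + 1 (high 16 bits)
        System.out.println(exclusiveCount(s)); // 1
        System.out.println(sharedCount(s));    // 1
    }
}
```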
2.4 Acquisition and release of the write lock
The lock and unlock methods of the WriteLock class:
```java
public void lock() {
    sync.acquire(1);
}

public void unlock() {
    sync.release(1);
}
```
2.4.1 Acquisition of the write lock
That is, the tryAcquire method in the Sync class:
```java
protected final boolean tryAcquire(int acquires) {
    // Current thread
    Thread current = Thread.currentThread();
    // Get the state
    int c = getState();
    // Write-lock hold count (i.e. the reentry count of the exclusive lock)
    int w = exclusiveCount(c);
    // The synchronization state != 0, meaning some thread already holds a read lock or the write lock
    if (c != 0) {
        // If the write count is 0, the read lock is held, so return false;
        // if the write count is not 0 but the write lock is held by another thread, also return false
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        // Check whether the same thread has acquired the write lock more than the maximum number of times (65535); reentry is supported
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // Update the state.
        // The current thread already holds the write lock and is re-entering, so only the count needs changing.
        setState(c + acquires);
        return true;
    }
    // Here c == 0: neither the read lock nor the write lock is held.
    // writerShouldBlock decides whether the writer should block
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        return false;
    // Set the current thread as the owner of the lock
    setExclusiveOwnerThread(current);
    return true;
}

// Returns the write-lock hold count
static int exclusiveCount(int c) {
    // AND the state with (2^16 - 1), which is equivalent to state mod 2^16.
    // This works because the write-lock count is stored in the lower 16 bits of state.
    return c & EXCLUSIVE_MASK;
}
```
Steps to obtain the write lock:
- (1) First get c and w: c is the current lock state, w is the write-lock count. If state != 0, some thread already holds a read lock or the write lock, so go to (2); otherwise go to (4).
- (2) If the lock state is not zero (c != 0) and the write count is 0 (w == 0), the read lock is held by some thread, so the current thread cannot obtain the write lock and false is returned. Likewise, if the write count is not 0 but the write lock is held by a different thread, the current thread cannot obtain it.
- (3) Check whether the current thread has acquired the write lock more than the maximum number of times; if so, throw an error, otherwise update the synchronization state (the current thread already holds the write lock, so the update is thread-safe) and return true.
- (4) If the state is 0, neither the read lock nor the write lock is held. Decide whether the writer should block (the fair and non-fair implementations differ: the non-fair policy never blocks here, while the fair policy blocks if there are threads that have been waiting longer in the synchronization queue). If blocking is not required, CAS updates the synchronization state: on success go to (5), on failure the lock was grabbed by another thread and false is returned. If blocking is required, false is also returned.
- (5) After successfully obtaining the write lock, set the current thread as the owner of the write lock and return true.
2.4.2 Release of the write lock
The tryRelease method in the Sync class:
```java
protected final boolean tryRelease(int releases) {
    // If the lock holder is not the current thread, throw an exception
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    // New write-lock count
    int nextc = getState() - releases;
    // If the exclusive (write) reentry count drops to 0, exclusive mode is released
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        // The new write count is 0, so clear the lock holder
        setExclusiveOwnerThread(null);
    // Set the new write-lock count.
    // The reentry count is updated whether or not exclusive mode was released
    setState(nextc);
    return free;
}
```
The write-lock release flow: first check whether the current thread holds the write lock, and throw an exception if not. Then check whether the write count after the release is 0: if so, the write lock is free, so the lock resource is released and the owner thread is set to null; otherwise the release only pops one level of reentry and the owner is not cleared.
2.5 Acquisition and release of the read lock
2.5.1 Acquisition of the read lock
That is, the tryAcquireShared method in the Sync class:
```java
protected final int tryAcquireShared(int unused) {
    // Get the current thread
    Thread current = Thread.currentThread();
    // Get the state
    int c = getState();
    // If the write count != 0 and the exclusive owner is not the current thread, fail.
    // (The owner itself may proceed because of lock downgrading.)
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        return -1;
    // Read-lock count
    int r = sharedCount(c);
    /*
     * readerShouldBlock(): whether the reader should wait (fairness policy)
     * r < MAX_COUNT: the hold count is below the maximum (65535)
     * compareAndSetState(c, c + SHARED_UNIT): set the read-lock state
     */
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        // r == 0 means this is the first reader thread; the first reader is not added to readHolds
        if (r == 0) {
            // Record the first reader thread
            firstReader = current;
            // Its hold count is 1
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            // The current thread is the first reader re-entering; increase its hold count
            firstReaderHoldCount++;
        } else {
            // The read count is not 0 and the current thread is not the first reader
            // Get the cached counter
            HoldCounter rh = cachedHoldCounter;
            // The counter is null or its tid is not the tid of the current thread
            if (rh == null || rh.tid != getThreadId(current))
                // Get the counter of the current thread
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                // The count is 0: put the counter back into readHolds
                readHolds.set(rh);
            // Count + 1
            rh.count++;
        }
        return 1;
    }
    return fullTryAcquireShared(current);
}
```
2.5.2 Release of the read lock
The tryReleaseShared method in the Sync class:
```java
protected final boolean tryReleaseShared(int unused) {
    // Get the current thread
    Thread current = Thread.currentThread();
    if (firstReader == current) {
        // The current thread is the first reader thread
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)
            // Its hold count is 1, so clear it
            firstReader = null;
        else
            // Decrease its hold count
            firstReaderHoldCount--;
    } else {
        // The current thread is not the first reader thread
        // Get the cached counter
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current))
            // The counter is null or its tid is not the current thread's tid:
            // get the counter of the current thread
            rh = readHolds.get();
        // Get the count
        int count = rh.count;
        if (count <= 1) {
            // Count less than or equal to 1: remove the counter
            readHolds.remove();
            if (count <= 0)
                // Count less than or equal to 0: throw an exception
                throw unmatchedUnlockException();
        }
        // Decrease the count
        --rh.count;
    }
    for (;;) { // spin
        // Get the state
        int c = getState();
        // Compute the new state
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on readers,
            // but it may allow waiting writers to proceed if
            // both read and write locks are now free.
            return nextc == 0;
    }
}
```
This method is how a reader thread releases the lock. First it checks whether the current thread is the first reader, firstReader. If so, and firstReaderHoldCount is 1, firstReader is set to null; otherwise firstReaderHoldCount is decreased by 1. If the current thread is not the first reader, the cached counter (the counter of the reader thread that most recently acquired the read lock) is checked: if it is null or its tid does not match the current thread, the current thread's own counter is fetched from readHolds. If that counter's count is 1 or less, it is removed, and if the count is 0 or less an exception is thrown; then the count is decreased. In either case the method finally enters a spin loop, which guarantees that the state is eventually updated by CAS.
2.6 Using the read-write lock
- 1. Basic code
```java
public class ReadWriteLockTest {
    private ReentrantReadWriteLock rw1 = new ReentrantReadWriteLock();

    // Acquire the write lock
    public void getW(Thread thread) {
        try {
            rw1.writeLock().lock();
            long start = System.currentTimeMillis();
            while (System.currentTimeMillis() - start <= 10) {
                System.out.println(thread.getName() + " is writing");
            }
            System.out.println(thread.getName() + " write operation completed");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            rw1.writeLock().unlock();
        }
    }

    // Acquire the read lock
    public void getR(Thread thread) {
        try {
            rw1.readLock().lock();
            long start = System.currentTimeMillis();
            while (System.currentTimeMillis() - start <= 10) {
                System.out.println(thread.getName() + " is reading");
            }
            System.out.println(thread.getName() + " read operation completed");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            rw1.readLock().unlock();
        }
    }
}
```
- 2. Concurrent read
```java
public static void main(String[] args) {
    final ReadWriteLockTest test = new ReadWriteLockTest();

    new Thread() {
        @Override
        public void run() {
            test.getR(Thread.currentThread());
        }
    }.start();

    new Thread() {
        @Override
        public void run() {
            test.getR(Thread.currentThread());
        }
    }.start();
}
```
Results:
Thread-1 is reading
Thread-0 is reading
Thread-1 read operation completed
Thread-0 read operation completed
You can see that reader threads do not queue behind each other.
- 3. Concurrent write
```java
public static void main(String[] args) {
    final ReadWriteLockTest test = new ReadWriteLockTest();

    new Thread() {
        @Override
        public void run() {
            test.getW(Thread.currentThread());
        }
    }.start();

    new Thread() {
        @Override
        public void run() {
            test.getW(Thread.currentThread());
        }
    }.start();
}
```
Results:
It can be seen that writer threads acquire the lock mutually exclusively.
- 4. Concurrent read and write
```java
public static void main(String[] args) {
    final ReadWriteLockTest test = new ReadWriteLockTest();

    new Thread() {
        @Override
        public void run() {
            test.getR(Thread.currentThread());
        }
    }.start();

    new Thread() {
        @Override
        public void run() {
            test.getW(Thread.currentThread());
        }
    }.start();
}
```
Results:
It can be seen that read and write threads also exclude each other.
3, Some problems related to Lock
Here the basic use of explicit locks (Lock) and the implicit lock (synchronized) has been covered. Next let's look at some Lock-related questions.
3.1 What is the difference between synchronized and Lock?
Because the most common implementation of Lock is ReentrantLock, ReentrantLock is used for the comparison below.
What synchronized and ReentrantLock have in common: both are used to coordinate access by multiple threads to shared objects and variables, both are reentrant locks (the same thread can acquire the same lock multiple times), and both guarantee visibility and mutual exclusion.
The specific differences:
- 1. Underlying implementation
synchronized is a keyword and belongs to the JVM level; the bytecode is implemented with a pair of monitorenter and monitorexit instructions.
ReentrantLock is a concrete class; it is a lock at the API level.
synchronized acquires and releases the lock implicitly, while ReentrantLock acquires and releases it explicitly.
- 2. Whether the lock is released automatically
synchronized does not require the user to release the lock manually: when the synchronized block finishes, the JVM automatically lets the thread release the lock, and the lock is also released if the thread exits with an exception.
ReentrantLock must be released manually in a finally block; forgetting to do so can lead to deadlock.
- 3. Whether you can test if the lock is held
synchronized cannot.
Lock can (for example via tryLock, or ReentrantLock's isLocked and isHeldByCurrentThread).
- 4. Whether locking is fair
synchronized is a non-fair lock.
ReentrantLock can be either; it is non-fair by default.
- 5. Whether there can be multiple wake-up conditions
synchronized cannot.
ReentrantLock can group the threads that need to be woken via Condition objects, instead of waking one thread at random or all threads as synchronized does (see the sketch after this list).
- 6. Whether it is interruptible
synchronized cannot be interrupted unless an exception is thrown or the block completes normally.
ReentrantLock supports interruptible acquisition.
- 7. Whether a timeout can be set
synchronized lock acquisition cannot time out;
ReentrantLock can set a timeout when acquiring the lock.
- 8. Pessimistic vs. optimistic strategy
synchronized is blocking synchronization and uses a pessimistic concurrency strategy;
Lock is non-blocking synchronization and leans toward an optimistic concurrency strategy.
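To make point 5 concrete, here is a minimal sketch (a simplified bounded buffer; the class and field names are illustrative) of how ReentrantLock's Condition objects let producers and consumers be woken as separate groups:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here
    private final Queue<T> items = new LinkedList<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();          // only producers sleep on this condition
            }
            items.add(item);
            notEmpty.signal();            // wake one waiting consumer, not everyone
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();         // only consumers sleep on this condition
            }
            T item = items.remove();
            notFull.signal();             // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

With synchronized, both producers and consumers would wait on the same monitor and notifyAll() would have to wake everyone; with two Condition objects only the relevant group is signaled.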
3.2 What are the implementations of optimistic and pessimistic locking?
- Pessimistic locking
Always assume the worst case: every time the data is read, assume someone else will modify it, so lock it on every access, and other threads block on the data until the lock is obtained. The synchronized keyword is a pessimistic lock.
- Optimistic locking
Every time the data is read, assume nobody else will modify it, so do not lock; only when updating, check whether someone else changed the data in the meantime, for example with a version number. Optimistic locking suits read-heavy applications and can improve throughput. In Java, the atomic classes under the java.util.concurrent.atomic package are implemented with CAS, which is one implementation of optimistic locking.
- Implementation of pessimistic locking
For example, synchronized is implemented by emitting the monitorenter and monitorexit instructions.
- Implementation of optimistic locking (see the sketch after this list)
  - A version identifier is used to check whether the data read is still consistent with the data at commit time; the version is bumped after the commit, and on inconsistency a discard-and-retry strategy can be used.
  - CAS.
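A minimal sketch of the CAS style of optimistic locking (OptimisticCounter is an illustrative name; AtomicInteger.compareAndSet is the real JDK API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read, compute, then CAS; retry if another thread got there first
    public int increment() {
        for (;;) {
            int current = value.get();   // read without locking
            int next = current + 1;
            if (value.compareAndSet(current, next)) { // succeeds only if nobody changed it
                return next;
            }
            // CAS failed: another thread updated the value concurrently, so retry with the fresh value
        }
    }
}
```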
3.3 Is a spin lock more efficient than a heavyweight lock?
A spin lock avoids blocking the thread as far as possible. For code blocks with little lock contention that hold the lock only very briefly, this greatly improves performance, because spinning costs less than blocking and suspending the thread and then waking it up again.
If lock contention is fierce, or the thread holding the lock executes the synchronized block for a long time, a spin lock is not appropriate: it keeps occupying the CPU doing useless work before obtaining the lock, so the cost of spinning exceeds the cost of blocking and suspending the thread, and other threads that need the CPU cannot get it, wasting CPU time.
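For illustration, a minimal spin lock sketch built on AtomicBoolean (SimpleSpinLock is an illustrative name; Thread.onSpinWait() requires Java 9+ and can simply be removed on Java 8):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait (spin) instead of parking the thread; cheap only if the wait is very short
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

The lock() method burns CPU while waiting, which is exactly why it only pays off when the lock is held very briefly and contention is low, as described above.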