Introduction
In our previous article, "Deeply Understand the Lock-Free CAS Mechanism of Java Concurrent Programming", we said that the CAS mechanism is the foundation of all Java concurrent programming; the AQS described in this chapter is the core of the whole JUC package. Before learning AQS, however, you need some background on the CAS mechanism, because CAS appears everywhere in the implementations of ReentrantLock and AQS.
1. The Lock interface in JUC
At the beginning of this series we framed concurrent programming around thread safety and its solutions, and in earlier articles we covered several of them, such as the lock-free CAS mechanism and the synchronized keyword. The CAS mechanism is a form of optimistic locking, while synchronized is a form of pessimistic locking. The ReentrantLock covered in this chapter, built on AQS, is also a pessimistic lock, but it differs from synchronized: synchronized is an implicit lock, whose acquisition and release happen automatically without developer intervention, whereas this chapter deals with explicit locks, whose acquisition and release must be coded by hand. In JDK 1.5, a Lock interface was added to the java.util.concurrent package; it defines the two methods lock() (acquire the lock) and unlock() (release the lock) to support explicit locking and unlocking. An explicit lock is used as follows:
```java
Lock lock = new ReentrantLock(); // Create the lock object
lock.lock();                     // Acquire the lock
try {
    // Code that requires the lock
} finally {
    lock.unlock();               // Release the lock
}
```
When the code above runs, once the current thread has executed the lock() method it holds the lock resource. Until it executes unlock(), no other thread can enter the lock-guarded code block: unable to acquire the lock, other threads block until the current thread releases it. Note that the unlock() call must be placed in the finally block. This guarantees that even if the guarded code throws an exception, the thread still releases the lock, preventing a situation where the lock is never released and every other thread blocks forever. Besides lock() and unlock(), the Lock interface also defines the following methods:
```java
/**
 * Acquire the lock:
 * If the lock is currently free, acquire it and return;
 * if it is not, block and keep competing for the lock until it is acquired.
 */
void lock();

/**
 * Release the lock:
 * After finishing its work, the current thread changes the lock state from
 * occupied back to available and notifies blocked threads.
 */
void unlock();

/**
 * Acquire the lock, responding to interruption (unlike lock(), the thread
 * can be interrupted while waiting for the lock):
 * If the lock is currently free, acquire it and return.
 * If it is not, block until one of two things happens:
 * 1. The current thread acquires the lock.
 * 2. The current thread is interrupted and abandons the acquisition.
 */
void lockInterruptibly() throws InterruptedException;

/**
 * Non-blocking acquisition:
 * Try to acquire the lock without blocking; the result is returned
 * immediately: true if the lock was acquired, false otherwise.
 */
boolean tryLock();

/**
 * Timed acquisition:
 * Try to acquire the lock within the given time. Returns false if the lock
 * was not acquired within that period; returns true if the current thread
 * acquired the lock within the period without being interrupted.
 */
boolean tryLock(long time, TimeUnit unit) throws InterruptedException;

/**
 * Get a wait/notify component bound to this lock:
 * A thread may call the component's await() method only while holding the
 * lock; once the thread calls await(), it releases the lock.
 */
Condition newCondition();
```
Looking at the methods the Lock interface provides, we can see that Lock offers several features that synchronized does not:
- ① Interruptible lock acquisition (synchronized does not support being interrupted while waiting for a lock);
- ② Non-blocking lock acquisition;
- ③ Timed lock acquisition that gives up after a timeout;
- ④ Multiple wait/notify conditions via Condition.
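The first three of those features can be demonstrated directly. Below is a minimal, self-contained sketch (the class and method names are made up for illustration): tryLock() returns a boolean immediately instead of blocking, and the timed variant gives up once the timeout expires.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    // Returns {this thread's tryLock result,
    //          another thread's tryLock result,
    //          another thread's timed tryLock result}.
    static boolean[] demo() {
        Lock lock = new ReentrantLock();
        boolean[] results = new boolean[3];

        // The lock is free, so a non-blocking attempt succeeds immediately
        results[0] = lock.tryLock();

        // While this thread holds the lock, another thread's attempts fail
        Thread other = new Thread(() -> {
            results[1] = lock.tryLock(); // returns false at once, no blocking
            try {
                // blocks for at most 50 ms, then gives up and returns false
                results[2] = lock.tryLock(50, TimeUnit.MILLISECONDS);
            } catch (InterruptedException ignored) { }
        });
        other.start();
        try {
            other.join();
        } catch (InterruptedException ignored) { }

        lock.unlock();
        return results;
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // true false false
    }
}
```

Contrast this with synchronized, where a thread arriving at a held monitor has no choice but to block indefinitely.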
2. Implementer of the Lock interface: the ReentrantLock reentrant lock
ReentrantLock, a class added under the JUC package in JDK 1.5, implements the Lock interface and serves the same purpose as synchronized. Compared with synchronized it is more flexible, but in exchange we must acquire and release the lock manually.
ReentrantLock is itself a reentrant lock: the thread that currently holds the lock may acquire it again. It also supports both fair and unfair modes. Fairness here refers to the order in which threads obtain the lock relative to the order in which they requested it. If the thread that requested the lock first always acquires it first, the lock is fair; if a thread that requested the lock earlier must still compete with later arrivals, the lock is unfair. It is worth noting that although threads compete for an unfair lock, in most cases the unfair lock has far higher throughput than the fair one. In special business scenarios where acquisition order matters, however, a fair lock is the right choice. As mentioned, ReentrantLock supports lock reentry, meaning the current thread can acquire the lock multiple times, but what we must keep in mind when using it is: however many times ReentrantLock is acquired, it must be released exactly that many times. A case follows:
```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Task implements Runnable {

    public static Lock lock = new ReentrantLock();
    public static int count = 0;

    @Override
    public void run() {
        for (int i = 0; i < 10000; i++) {
            lock.lock(); // Acquire the lock for the first time
            lock.lock(); // Acquire the lock for the second time (reentrant)
            try {
                count++; // Non-atomic operation: the thread-safety issue
            } finally {
                lock.unlock(); // First release
                lock.unlock(); // Second release
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Task task = new Task();
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count); // Execution result: 20000
    }
}
```
The example above is simple: threads t1 and t2 simultaneously perform the non-atomic ++ operation on the shared resource count, and we use ReentrantLock to solve the resulting thread-safety problem. In the code we acquire the lock twice; since ReentrantLock is reentrant, acquiring it twice is not a problem, but when releasing it in the finally block we must be careful to unlock twice as well. As the case shows, ReentrantLock is fairly simple to use. Next, let us look at the other methods it provides, to get a more complete picture of it:
```java
// Returns the number of times the current thread has called lock()
int getHoldCount();
// Returns the thread that currently holds the lock, or null if it is not held
protected Thread getOwner();
// Returns a collection of threads that may be waiting to acquire this lock
// (an internal queue is maintained; analyzed later)
protected Collection<Thread> getQueuedThreads();
// Returns an estimate of the number of threads waiting to acquire this lock
int getQueueLength();
// Returns a collection (estimate) of threads waiting on the given Condition
// associated with this lock
protected Collection<Thread> getWaitingThreads(Condition condition);
// Returns an estimate of the number of threads that called await() on the
// given Condition of this lock and have not yet been signalled
int getWaitQueueLength(Condition condition);
// Queries whether the given thread is waiting to acquire this lock
boolean hasQueuedThread(Thread thread);
// Queries whether any thread is waiting to acquire this lock
boolean hasQueuedThreads();
// Queries whether any thread is waiting on the given Condition of this lock
boolean hasWaiters(Condition condition);
// Returns the lock type: true for a fair lock, false otherwise
boolean isFair();
// Queries whether the current thread holds this lock
boolean isHeldByCurrentThread();
// Queries whether this lock is held by any thread
boolean isLocked();
```
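To make one of these methods concrete: getHoldCount() tracks the current thread's reentrancy depth, increasing with each lock() and decreasing with each unlock(). A small sketch (the class name is made up for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {

    // Returns the hold counts observed at each step of a
    // reentrant acquire/release sequence.
    static int[] demo() {
        ReentrantLock lock = new ReentrantLock();
        int[] counts = new int[4];
        lock.lock();                      // first acquisition
        counts[0] = lock.getHoldCount();  // depth 1
        lock.lock();                      // reentrant acquisition
        counts[1] = lock.getHoldCount();  // depth 2
        lock.unlock();
        counts[2] = lock.getHoldCount();  // back to depth 1
        lock.unlock();
        counts[3] = lock.getHoldCount();  // 0: fully released
        return counts;
    }

    public static void main(String[] args) {
        for (int c : demo()) {
            System.out.println(c);
        }
    }
}
```

The lock is only truly released (and visible to other threads as free) when the count returns to 0, which is exactly why each acquisition must be paired with a release.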
As we can see, ReentrantLock, as an implementer of the Lock interface, not only implements the methods the interface defines but also adds some of its own. The following simple case shows what several of these extra methods do:
```java
import lombok.SneakyThrows;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class Task implements Runnable {

    public static ReentrantLock lock = new ReentrantLock();
    public static int count = 0;

    // Simple usage of ReentrantLock
    @SneakyThrows
    @Override
    public void run() {
        for (int i = 0; i < 10000; i++) {
            lock.lock();                        // First: blocking acquisition
            lock.tryLock();                     // Second: non-blocking acquisition
            lock.tryLock(10, TimeUnit.SECONDS); // Third: timed acquisition
            try {
                count++; // Non-atomic operation: thread-safety issue
            } finally {
                lock.unlock(); // First release
                lock.unlock(); // Second release
                lock.unlock(); // Third release
            }
        }
    }

    public void reentrantLockApiTest() {
        lock.lock(); // Acquire the lock
        try {
            // Number of times the current thread has called lock()
            System.out.println("Thread: " + Thread.currentThread().getName()
                    + "\t lock() call count: " + lock.getHoldCount());
            // Is the current lock a fair lock?
            System.out.println("Is the current lock a fair lock? " + lock.isFair());
            // Estimated number of threads waiting to acquire the lock
            System.out.println("Currently " + lock.getQueueLength()
                    + " threads are waiting to acquire the lock!");
            // Is the given thread waiting to acquire the lock?
            System.out.println("Is the current thread waiting to acquire the lock? "
                    + lock.hasQueuedThread(Thread.currentThread()));
            // Is any thread waiting to acquire the lock?
            System.out.println("Is any thread waiting to acquire the lock? "
                    + lock.hasQueuedThreads());
            // Does the current thread hold the lock?
            System.out.println("Does the current thread hold the lock? "
                    + lock.isHeldByCurrentThread());
            // Is the lock held by any thread?
            System.out.println("Is the lock held by any thread? " + lock.isLocked());
        } finally {
            lock.unlock(); // Release the lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Task task = new Task();
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count); // Execution result: 20000
        task.reentrantLockApiTest();
        /*
         * Execution result:
         * Thread: main  lock() call count: 1
         * Is the current lock a fair lock? false
         * Currently 0 threads are waiting to acquire the lock!
         * Is the current thread waiting to acquire the lock? false
         * Is any thread waiting to acquire the lock? false
         * Does the current thread hold the lock? true
         * Is the lock held by any thread? true
         */
    }
}
```
The case above shows that ReentrantLock is still fairly simple to use, so we will leave its application here for now. Next, we will analyze ReentrantLock's internal implementation step by step. ReentrantLock is in fact built on the AQS framework, so before studying its internals, let us first get an in-depth understanding of AQS.
3. The JUC concurrency kernel: the AQS base component
The full name of AQS is AbstractQueuedSynchronizer (abstract queued synchronizer). It is the core base component of the Java concurrency package: the framework used to build semaphores, locks, latches, and other synchronization components.
Brief description of AQS working principle
As mentioned before in "Thoroughly Understand the Implementation Principle of the Synchronized Keyword in Java Concurrent Programming", the synchronized heavyweight lock is implemented on top of a counter in the ObjectMonitor object, and AQS is similar. It controls synchronization state through a global int variable named state, modified with the volatile keyword. When state is 0, no thread currently holds the lock; when state is non-zero, the lock is held by some thread, and any other thread that wants the lock must enter the synchronization queue and wait for the holder to release it. AQS builds a FIFO (first in, first out) synchronization queue out of its internal Node class to manage threads that failed to acquire the lock, adding each of them to the queue to wait. AQS also uses its internal ConditionObject class to build waiting queues: when await() is called on a Condition, the thread waiting for the lock joins the waiting queue; when signal() is called on the Condition, the thread is transferred from the waiting queue to the synchronization queue to compete for the lock again. Note that there are two kinds of queue here:
① Synchronization queue: a thread joins this queue when it tries to acquire the lock and finds it already occupied by another thread;
② Waiting queues (there may be several): the queue a thread joins after calling await() on a Condition;
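The two queues can be made concrete with a small sketch (the class and method names here are illustrative): the waiter thread calls await(), thereby releasing the lock and entering the waiting queue; the signalling thread then calls signal(), which moves the waiter back to the synchronization queue, where it re-acquires the lock and finishes.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {

    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition ready = lock.newCondition();
    private static boolean flag = false;

    // Starts a waiter that parks on the Condition, then signals it.
    // Returns "woken" if the waiter was transferred back and finished.
    static String demo() {
        Thread waiter = new Thread(() -> {
            lock.lock();
            try {
                while (!flag) {    // loop guards against spurious wakeups
                    ready.await(); // releases the lock, joins the waiting queue
                }
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        waiter.start();
        try {
            Thread.sleep(100);     // give the waiter time to block in await()
            lock.lock();
            try {
                flag = true;
                ready.signal();    // waiting queue -> synchronization queue
            } finally {
                lock.unlock();
            }
            waiter.join(2000);
        } catch (InterruptedException ignored) { }
        return waiter.isAlive() ? "stuck" : "woken";
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note that signal() alone does not hand over the lock: the signalled thread still has to compete for the synchronization state, which is exactly the waiting-queue-to-synchronization-queue transfer described above.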
We should not confuse the two. Let us first analyze the synchronization queue in AQS; its model is as follows:
```java
public abstract class AbstractQueuedSynchronizer
        extends AbstractOwnableSynchronizer {
    // Points to the head of the synchronization queue
    private transient volatile Node head;
    // Points to the tail of the synchronization queue
    private transient volatile Node tail;
    // Synchronization state flag
    private volatile int state;
    // Omitted
}
```
head and tail are AQS global variables: head points to the head of the synchronization queue (note that the head node is an empty node that stores no thread information), and tail points to the queue's tail. The synchronization queue in AQS is built this way as a doubly linked list, which makes adding and removing nodes convenient. state is the synchronization state flag mentioned earlier. When a thread calls the lock's lock() method: if state is 0, the lock is not held by another thread, so the current thread sets state to 1, indicating a successful acquisition; if state is 1, the lock is already held by another thread, so the current thread is wrapped in a Node and joins the synchronization queue to wait. A Node encapsulates each thread seeking the lock, holding both the thread itself and its status, for example whether it is blocked, waiting to be woken, or interrupted. Each node is linked to a predecessor node prev and a successor node next, so that when the lock-holding thread releases the lock, the next waiting thread can be woken quickly. The Node class structure is as follows:
```java
static final class Node {
    // Shared mode
    static final Node SHARED = new Node();
    // Exclusive mode
    static final Node EXCLUSIVE = null;
    // The thread has been cancelled (terminal state)
    static final int CANCELLED = 1;
    // The successor's thread is waiting to be woken
    static final int SIGNAL = -1;
    // The node is waiting on a Condition
    static final int CONDITION = -2;
    // In shared mode, the acquired synchronization state should be propagated
    static final int PROPAGATE = -3;
    // Wait status: CANCELLED, SIGNAL, CONDITION, PROPAGATE, or the initial 0
    volatile int waitStatus;
    // Predecessor node in the synchronization queue
    volatile Node prev;
    // Successor node in the synchronization queue
    volatile Node next;
    // The thread waiting for the lock resource
    volatile Thread thread;
    // Successor node in the waiting queue (related to Condition; analyzed later)
    Node nextWaiter;

    // Whether the node is in shared mode
    final boolean isShared() {
        return nextWaiter == SHARED;
    }

    // Get the predecessor node
    final Node predecessor() throws NullPointerException {
        Node p = prev;
        if (p == null)
            throw new NullPointerException();
        else
            return p;
    }
    // Omitted
}
```
The global constants SHARED and EXCLUSIVE represent shared mode and exclusive mode respectively. Shared mode allows multiple threads to operate on the lock resource at the same time; for example, Semaphore and the read lock ReadLock are implemented on AQS's shared mode. Exclusive mode means only one thread at a time may operate on the lock resource; components such as ReentrantLock are built on AQS's exclusive mode. The waitStatus variable represents the status of the thread currently wrapped in the Node, and there are five cases:
- 0, the initial value: waitStatus = 0, the status a node has when first created.
- CANCELLED status: waitStatus = 1. If the thread waiting in the synchronization queue times out or is interrupted, its node is marked CANCELLED and will be removed from the queue. A node in this status has reached a terminal state and will not change again.
- SIGNAL status: waitStatus = -1. A node is marked SIGNAL to record that the thread of its successor node is blocked and waiting to be woken: when the SIGNAL-marked node releases the lock or is cancelled, it must wake the thread of the node behind it. Put simply, as soon as the predecessor releases the lock, the thread of the successor node is notified to run.
- CONDITION status: waitStatus = -2, related to Condition. A node in this status sits in a waiting queue, its thread waiting on a Condition. When another thread calls that Condition's signal() method, the node is transferred from the waiting queue to the synchronization queue to compete for the lock.
- PROPAGATE status: waitStatus = -3, related to shared mode. It records that a release of the shared synchronization state should be propagated onward, so that subsequent shared acquisitions also wake their successors.
The variables prev and next are the current node's predecessor and successor nodes, and thread is the wrapped thread object. nextWaiter is the node's successor in the waiting queue (related to Condition, analyzed later). We now have a general picture of the Node structure. In short, AQS, the core component of JUC, supports two kinds of lock implementation: exclusive mode (e.g. ReentrantLock) and shared mode (e.g. Semaphore). Implementation classes in both modes are built on AQS, which maintains an internal queue: when the number of threads trying to acquire the lock exceeds what the current mode allows, each excess thread is wrapped in a Node and queued to wait. AQS performs this whole series of operations for us; whether it is ReentrantLock or Semaphore, most of their methods are ultimately completed by calling AQS directly or indirectly. The overall AQS class diagram is as follows:
- AbstractOwnableSynchronizer (abstract class): defines methods for storing and retrieving the thread that currently owns the lock resource.
- AbstractQueuedSynchronizer (abstract class): AQS is the acronym of this class, the core of the whole framework. It queues threads in a virtual queue and declares tryAcquire and tryRelease for lock acquisition and release, but provides no default implementation for them; the concrete logic is supplied by subclasses, which makes the framework flexible to use in development.
- Node (inner class): an inner class of AbstractQueuedSynchronizer used to build AQS's internal virtual queue, which lets AQS manage the threads that need to acquire the lock.
- Sync (inner abstract class): an inner class of ReentrantLock that extends AbstractQueuedSynchronizer and implements the lock acquisition and release methods it defines; it also declares an abstract lock() method for its subclasses to implement.
- NonfairSync (inner class): an inner class of ReentrantLock that extends Sync; the implementer of the unfair lock.
- FairSync (inner class): an inner class of ReentrantLock that extends Sync; the implementer of the fair lock.
- Lock (interface): the top-level interface of Java's lock classes, defining the lock operations lock(), unlock(), tryLock(), and so on.
- ReentrantLock: an implementer of the Lock interface, with three inner classes: Sync, NonfairSync, and FairSync. On creation, its fair constructor parameter decides whether the lock is fair or unfair. Most of its operations are performed by indirectly calling AQS methods.
The class diagram shows that AQS is an abstract class, yet its source contains no abstract methods. This is because AQS was designed as a base component rather than a class to be used directly: it provides infrastructure for the real implementation classes, such as building the synchronization queue and controlling the synchronization state. From a design-pattern perspective, AQS is built on the template-method pattern: besides the core methods for concurrent operations and synchronization-queue handling, it offers template methods for subclasses to implement, such as the locking and unlocking operations. Why? Because while AQS encapsulates the core concurrent operations, its implementations split into two modes, shared and exclusive, whose locking and unlocking logic differ. AQS concerns itself only with the common internals, not the mode-specific logic, so it leaves template methods to subclasses. Concretely, to implement an exclusive lock, ReentrantLock implements the tryAcquire() and tryRelease() methods, while the shared-mode Semaphore implements tryAcquireShared() and tryReleaseShared() instead. The benefit is plain to see: both modes rest on the same base component (AQS) while only the locking/unlocking logic differs. More importantly, writing a custom lock becomes very simple: choose a mode and implement the corresponding locking and unlocking template methods. The template methods AQS provides for the exclusive and shared modes are as follows:
```java
// Acquire the lock in exclusive mode
protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}
// Release the lock in exclusive mode
protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}
// Acquire the lock in shared mode
protected int tryAcquireShared(int arg) {
    throw new UnsupportedOperationException();
}
// Release the lock in shared mode
protected boolean tryReleaseShared(int arg) {
    throw new UnsupportedOperationException();
}
// Whether the current thread holds the lock exclusively
protected boolean isHeldExclusively() {
    throw new UnsupportedOperationException();
}
```
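To sketch how these template methods are meant to be used, here is a minimal, non-reentrant exclusive lock in the style of the usage example from the AbstractQueuedSynchronizer documentation (state 0 = free, 1 = held). The class name Mutex and its public methods are illustrative, not part of the JDK:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal exclusive lock built on AQS: state 0 = free, 1 = held.
public class Mutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS the state from 0 to 1; whoever wins owns the lock
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            // Only the owner reaches this point, so a plain write is safe
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }   // queueing handled by AQS
    public boolean tryLock()  { return sync.tryAcquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```

Note that Mutex only supplies the state-transition logic; the queueing, blocking, and waking are all inherited from AQS via acquire(1) and release(1), which is exactly the division of labour the template-method design intends.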
So far we have a general understanding of how the concurrency kernel component AQS works. Next, we take ReentrantLock as the vehicle for analyzing the concrete implementation of AQS.
4. Analyzing the implementation of AQS exclusive mode through ReentrantLock
4.1. NonfairSync: the unfair lock in ReentrantLock
The AQS synchronizer manages the synchronization state flag state through a synchronization queue built on an internal FIFO doubly linked list. When a thread fails to acquire the lock, AQS wraps the thread and its related information into a Node, adds it to the synchronization queue, and blocks the thread. When the synchronization state is released, AQS wakes the thread in the head node of the queue so it can try to modify state and acquire the lock. Let us focus on the concrete logic of acquiring the lock, releasing the lock, and wrapping threads into queued nodes, starting the analysis of AQS from the perspective of ReentrantLock's unfair lock.
```java
// Constructor: by default the lock created is an unfair lock (NonfairSync)
public ReentrantLock() {
    sync = new NonfairSync();
}

// Constructor: creates a lock of the given type (true = fair, false = unfair)
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

// Lock / acquire operation
public void lock() {
    sync.lock();
}
```
4.1.1 Analysis of the lock() method in ReentrantLock
Let's start from the perspective of unfair lock:
```java
/**
 * Unfair lock class (a subclass of Sync)
 */
static final class NonfairSync extends Sync {
    // Lock
    final void lock() {
        // Perform a CAS operation to modify the synchronization state and
        // acquire the lock. Multiple threads may modify state at the same
        // time, so CAS is needed to guarantee atomicity.
        if (compareAndSetState(0, 1))
            // On success, set the exclusive owner thread to the current thread
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1); // Otherwise, request the synchronization state again
    }
}
```
In the NonfairSync class, lock acquisition proceeds as follows: first a CAS on state tries to change the synchronization flag from 0 to 1. If that succeeds and returns true, the synchronization state, and with it the lock, has been acquired, and the exclusive owner thread is set to the current thread. If it returns false, the acquisition failed and acquire(1) is executed. This method is insensitive to thread interruption: even if the current thread fails to acquire the lock and is added to the synchronization queue to wait, interrupting it will not remove it from the queue. acquire(1) looks like this:
```java
public final void acquire(int arg) {
    // Try to acquire the synchronization state again
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
acquire() is a method provided by AQS. The parameter arg is the value to set when the synchronization state is obtained (that is, the value state should take: 0 means the lock is free, 1 means it is held). Since we are acquiring the lock, the argument passed here is normally 1. Inside the method, tryAcquire(arg) is executed first. As analyzed earlier, AQS leaves this method to its subclasses, so for NonfairSync the tryAcquire(arg) logic is implemented by ReentrantLock's inner Sync class. The code is as follows:
```java
// NonfairSync class
static final class NonfairSync extends Sync {
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// Sync: an inner class of ReentrantLock
abstract static class Sync extends AbstractQueuedSynchronizer {

    // The nonfairTryAcquire method
    final boolean nonfairTryAcquire(int acquires) {
        // Get the current thread and the synchronizer's current state value
        final Thread current = Thread.currentThread();
        int c = getState();
        // If the synchronization state is 0, try to acquire it again
        if (c == 0) {
            // Attempt to modify the state flag with a CAS operation
            if (compareAndSetState(0, acquires)) {
                // On success, set the exclusive owner thread to the current thread
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        // If the current thread already holds the lock, this is a reentrant
        // acquisition: increase the state value
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0) // overflow
                throw new Error("Maximum lock count exceeded");
            // Only the single thread holding the lock reaches this point, so
            // there is no thread-safety issue; setState(nextc) can be called directly
            setState(nextc);
            return true;
        }
        return false;
    }
    // Omitted
}
```
Analyzing the code above, we can see that the unfair lock's nonfairTryAcquire(acquires) method does two things:
- 1. It tries once more to modify the synchronization flag and acquire the lock (the thread that previously held the lock may have released it since the current thread's failed attempt). On success, it sets the exclusive owner thread to the current thread and returns true.
- 2. It checks whether the current thread is already the exclusive owner thread. If so, the current thread already holds the lock and has not released it; this is lock reentry, so state is increased by 1 and true is returned.
- If neither of the two checks is satisfied, false is returned, and nonfairTryAcquire(acquires) ends.
It is worth noting that nonfairTryAcquire(acquires) uses a CAS operation to modify the state synchronization flag, which guarantees thread safety. As long as any thread calls nonfairTryAcquire(acquires) and sets the state successfully, it obtains the lock, whether it is a newly arrived thread or one already in the synchronization queue. After all, this is a nonfair lock: it does not guarantee that a thread in the synchronization queue acquires the lock before a newly arrived request (the head node may have just released the synchronization state when a newly arrived thread grabs it). This differs from the fair lock analyzed later. Now let's return to the acquire(1) call made in the lock() method of the NonfairSync class:
```java
public final void acquire(int arg) {
    // Try to acquire the synchronization state; on failure, enqueue and wait
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
Here, if tryAcquire(arg) succeeds in obtaining the lock and returns true, the rest of the if condition is not evaluated, which is the ideal case. If tryAcquire(arg) returns false, however, addWaiter(Node.EXCLUSIVE) runs to wrap the thread in a node and enqueue it (since ReentrantLock is an exclusive lock, the node type is Node.EXCLUSIVE). The addWaiter method is as follows:
```java
private Node addWaiter(Node mode) {
    // Wrap the thread that failed to acquire the synchronization state in a Node
    Node node = new Node(Thread.currentThread(), mode);
    Node pred = tail;
    // If this is the first node to join, tail is null and this branch is skipped;
    // otherwise try a fast CAS append at the tail
    if (pred != null) {
        node.prev = pred;
        // CAS-swap the tail node to append quickly
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    // First node to join, or the fast CAS failed: fall back to enq()
    enq(node);
    return node;
}
```
In the addWaiter() method, the current thread and the Node.EXCLUSIVE node type are first wrapped into a Node. The AQS global variable tail (which points to the last node of the synchronization queue maintained in AQS) is then assigned to pred and checked. If the tail node is not null, the queue already contains nodes, so a CAS operation is attempted to quickly append the newly built node to the tail; if that CAS fails, enq(node) is executed. If the tail node is null, the synchronization queue contains no nodes yet, and enq(node) is executed directly as well. Let's continue with the implementation of enq(node):
```java
private Node enq(final Node node) {
    // Infinite loop (spin)
    for (;;) {
        Node t = tail;
        // If the tail is null, the queue has no head node yet
        if (t == null) { // Must initialize
            // Create a head node and set it with CAS
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            // Append the new node at the tail of the queue
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}
```
In this method, for(;;) starts an infinite loop that performs CAS operations inside it (to avoid concurrency problems). It does two things: first, if the synchronization queue in AQS has not been initialized, it creates a new Node and calls compareAndSetHead() to set it as the head node; second, if the queue already exists, it appends the passed-in node to the tail. Note that multiple threads may attempt both steps at the same time; if one thread succeeds in modifying head or tail, the others keep looping until their own modification succeeds. Using CAS atomic operations to set the head node and replace the tail node guarantees thread safety here. It is also apparent that the head node itself stores no data; it is just a freshly created leading node, while tail always points to the last node (provided the queue is not null).
For example, if six threads T1, T2, T3, T4, T5, and T6 try to join the queue at the same time and only T2's CAS succeeds, the other five threads (T1, T3, T4, T5, T6) keep looping until they are enqueued successfully.
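The retry behaviour described above can be sketched with a minimal stand-alone queue. This is a hypothetical illustration (the class and field names are mine, not the JDK's): each thread spins until its CAS on the tail succeeds, so even when six threads race, no enqueue is lost.

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasEnqueueDemo {
    static final class Node {
        final String name;
        volatile Node next;
        Node(String name) { this.name = name; }
    }

    // dummy leading node, like the data-free head node AQS creates in enq()
    static final Node head = new Node("head");
    static final AtomicReference<Node> tail = new AtomicReference<>(head);

    static void enqueue(Node node) {
        for (;;) {                              // spin, like enq()'s for(;;)
            Node t = tail.get();
            if (tail.compareAndSet(t, node)) {  // only one thread wins per round
                t.next = node;                  // link predecessor -> node
                return;
            }                                   // lost the race: reread tail, retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[6];
        for (int i = 0; i < workers.length; i++) {
            final String name = "T" + (i + 1);
            workers[i] = new Thread(() -> enqueue(new Node(name)));
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        int count = 0;
        for (Node n = head.next; n != null; n = n.next) count++;
        System.out.println(count);              // all 6 nodes made it into the queue
    }
}
```

After all threads join, traversing from the dummy head always finds all six nodes: losers of a CAS round never drop their node, they simply retry against the new tail.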
Nodes added to the synchronization queue enter a spin process: when its condition is met, each node tries to obtain the synchronization state, then exits the synchronization queue and ends its spin, returning from the earlier acquire() method. This spin process is performed in the acquireQueued(addWaiter(Node.EXCLUSIVE), arg) method. The code is as follows:
```java
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false; // interrupt flag
        // Infinite loop (spin)
        for (;;) {
            // Get the predecessor node
            final Node p = node.predecessor();
            // If p is the head node, try to acquire the synchronization state
            if (p == head && tryAcquire(arg)) {
                // Set node as the new head node
                setHead(node);
                // Unlink the old head node so it can be garbage collected
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            // If the predecessor is not the head, decide whether to park the thread
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            // If the synchronization state was never acquired, cancel this request
            cancelAcquire(node);
    }
}
```
During the spin (the infinite loop), the thread in the current node starts trying to acquire the synchronization state only when the node's predecessor is the head node (following the FIFO principle). The head node belongs to the thread that currently holds the synchronization state; only when the head node releases the synchronization state and wakes its successor can the successor acquire it. That is why a node attempts acquisition only when its predecessor is the head node, and is parked the rest of the time. If the current node does acquire the synchronization state and enters the if branch, the setHead() method sets the current node as the head node, as follows:
```java
// Set the given node as the head node of the synchronization queue
private void setHead(Node node) {
    head = node;
    // Clear the data stored in the node
    node.thread = null;
    node.prev = null;
}
```
After node is set as the head node, the thread and predecessor references stored in it are cleared. The current thread has successfully acquired the lock, so its thread reference no longer needs to be stored; and since the node has become the head node, it no longer has a predecessor, so that reference is cleared too. The head node keeps only the reference to its successor, which makes it easy to wake the successor thread when the lock is released. That is the logic executed when the node's predecessor is the head node. If the predecessor is not the head, the code executes if (shouldParkAfterFailedAcquire(p, node) && parkAndCheckInterrupt()) interrupted = true;. The logic is as follows:
```java
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    // Get the waiting status of the predecessor node
    int ws = pred.waitStatus;
    // If it is already SIGNAL, return true: the thread can safely park
    if (ws == Node.SIGNAL)
        return true;
    // A waiting status greater than 0 means CANCELLED:
    // walk backwards past predecessors until a non-cancelled node is found
    if (ws > 0) {
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        // The waiting status is less than or equal to 0 and not SIGNAL:
        // set it to SIGNAL, meaning this node's thread is waiting to be woken
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

private final boolean parkAndCheckInterrupt() {
    // Park (suspend) the current thread
    LockSupport.park(this);
    // interrupted() only reads and clears the current interrupt status;
    // it does not interrupt the thread. The result may be true or false.
    return Thread.interrupted();
}

// LockSupport → park() method:
public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    // Set the monitor blocker for the current thread
    setBlocker(t, blocker);
    // Call the JVM-level native blocking mechanism to park the current thread
    UNSAFE.park(false, 0L);
    // After unparking, reset the blocker to null
    setBlocker(t, null);
}
```
The shouldParkAfterFailedAcquire() method checks whether the node's predecessor is in the SIGNAL (waiting-to-wake) state; if so, it returns true. If the predecessor's waitStatus is greater than 0 (only CANCELLED = 1 satisfies this), the predecessor is no longer in use and should be removed from the synchronization queue, so a do/while loop walks back through the predecessors until a non-CANCELLED node is found. Otherwise, when the predecessor's waitStatus is neither CANCELLED nor SIGNAL (for example, a node that has just transferred from a condition waiting queue into the synchronization queue and whose state is still CONDITION), it needs to be converted to SIGNAL, after which the node waits to be woken.
When shouldParkAfterFailedAcquire() returns true, the node's predecessor is in the SIGNAL state but is not the head node, so parkAndCheckInterrupt() parks the thread, putting it into a WAITING state. A thread in this state must wait for an unpark() operation to wake it. The lock() operation inside ReentrantLock is thus completed indirectly through the FIFO synchronization queue of AQS. We can summarize the overall flow in the chart below:
4.1.2 Principles of other lock-acquisition methods in ReentrantLock
We covered the implementation of the ReentrantLock.lock() method in detail above. In development we sometimes acquire the lock in an interruptible way, for example by calling ReentrantLock's lockInterruptibly() or the timed tryLock(long, TimeUnit). Under the hood, lockInterruptibly() eventually calls doAcquireInterruptibly() (the timed tryLock follows the closely related doAcquireNanos(), which is likewise interruptible). The code is as follows:
```java
private void doAcquireInterruptibly(int arg)
        throws InterruptedException {
    // Wrap the thread in a node and enqueue it
    final Node node = addWaiter(Node.EXCLUSIVE);
    boolean failed = true;
    try {
        for (;;) {
            // Get the predecessor of the current node
            final Node p = node.predecessor();
            // If the predecessor is the head node, try to acquire
            // the lock resource / synchronization state
            if (p == head && tryAcquire(arg)) {
                // On success, set the current node as the head node
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                // Throw an exception to abort this thread's
                // synchronization state request
                throw new InterruptedException();
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
```
It differs from the lock() method in that:
```java
/* ---------------- lock() ---------------- */
// If the predecessor is not the head, decide whether to park the thread
if (shouldParkAfterFailedAcquire(p, node) &&
    parkAndCheckInterrupt())
    interrupted = true;

/* ---- lockInterruptibly() / tryLock(long, TimeUnit) ---- */
if (shouldParkAfterFailedAcquire(p, node) &&
    parkAndCheckInterrupt())
    // Throw an exception to abort this thread's synchronization state request
    throw new InterruptedException();
```
In the interruptible acquisition mode, when a thread interrupt is detected, an exception is thrown directly, which aborts the thread's synchronization state request and removes it from the synchronization queue.
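The difference is easy to observe from the caller's side. The following sketch (class and field names are mine, for illustration) parks a thread in the synchronization queue via lockInterruptibly() and then interrupts it; the parked request is aborted with an InterruptedException instead of waiting forever:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    static volatile boolean wasInterrupted = false;

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                          // main thread holds the lock
        CountDownLatch done = new CountDownLatch(1);

        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();     // parks: the lock is held by main
                lock.unlock();                // not reached in this demo
            } catch (InterruptedException e) {
                wasInterrupted = true;        // the queued request was aborted
                done.countDown();
            }
        });
        waiter.start();
        Thread.sleep(200);                    // give the waiter time to park
        waiter.interrupt();                   // cancel its synchronization request
        done.await();
        System.out.println("interrupted while waiting: " + wasInterrupted);
        lock.unlock();
    }
}
```

Had the waiter called plain lock() instead, the interrupt would only have set its interrupt flag; the thread would keep waiting for the lock.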
4.1.3 Analysis of the unlock() lock-release principle in ReentrantLock
Generally, when using an explicit lock such as ReentrantLock, we must release the lock resource manually after acquiring it. In ReentrantLock, after calling lock() to acquire the lock, we must manually call unlock() to release it. The code for unlock() is as follows:
```java
// ReentrantLock → unlock() method
public void unlock() {
    sync.release(1);
}

// AQS → release() method
public final boolean release(int arg) {
    // Attempt to release the lock
    if (tryRelease(arg)) {
        // Examine the head node
        Node h = head;
        if (h != null && h.waitStatus != 0)
            // Wake the thread of the successor node
            unparkSuccessor(h);
        return true;
    }
    return false;
}

// ReentrantLock → Sync → tryRelease(int releases) method
protected final boolean tryRelease(int releases) {
    // Modify the synchronization state: acquiring adds, releasing subtracts
    int c = getState() - releases;
    // If the releasing thread is not the lock holder, throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    // If the state reaches 0, the synchronization state is fully released
    if (c == 0) {
        free = true;
        // Clear the exclusive owner thread
        setExclusiveOwnerThread(null);
    }
    // Update the synchronization state
    setState(c);
    return free;
}
```
The logic of releasing a lock is much simpler than that of acquiring one. The unlock() method ultimately calls tryRelease(int releases), which is implemented by ReentrantLock because AQS provides no concrete implementation and leaves the logic to its subclasses. After the lock resource is released, unparkSuccessor(h) wakes the thread of the successor node. Its code is as follows:
```java
private void unparkSuccessor(Node node) {
    // node is usually the head node; get its waiting status
    int ws = node.waitStatus;
    if (ws < 0)
        // Clear the status; this CAS is allowed to fail
        compareAndSetWaitStatus(node, ws, 0);
    // Get the successor of the node
    Node s = node.next;
    if (s == null || s.waitStatus > 0) { // null or cancelled
        s = null;
        // Search backwards from the tail for the frontmost valid node
        // (a waiting status <= 0 marks a valid node)
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread); // Wake the successor node's thread
}
```
In the unparkSuccessor(h) method, unpark() wakes the thread of the first successor node that has not given up competing for the lock, that is, a node s with waitStatus <= 0. Recall the spin method acquireQueued() analyzed earlier; the two can now be understood together. After the thread of node s is woken, it evaluates if (p == head && tryAcquire(arg)) in acquireQueued() (even if p is not the head node, no harm is done, because shouldParkAfterFailedAcquire() will simply run again). Once the node of the thread that held the lock has been released and s has gone through the unparkSuccessor() logic, s becomes the frontmost node in the AQS synchronization queue that is still competing for the lock; after the shouldParkAfterFailedAcquire() processing, s also becomes the next node of the head node. So on the next iteration of the spin loop, the p == head test holds, s acquires the synchronization state and sets itself as the head node, indicating that it holds the lock resource, and the whole acquire() method finishes.
In short, AQS maintains a FIFO synchronization queue. When a thread fails to obtain the lock via ReentrantLock.lock(), it is wrapped in a Node and joins the synchronization queue to wait for the lock resource to be released, spinning in the meantime. When the predecessor of the thread's node is the queue head, the thread tries to modify the synchronization state (state + 1); if the modification succeeds, the lock has been acquired, and the thread sets its node as the head node to indicate that it holds the lock. When a thread later calls ReentrantLock.unlock() to release the lock, the tryRelease(int releases) method of the Sync inner class modifies the synchronization state again (state - 1), and on success the thread in the successor node of the current thread's node is woken.
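The whole flow summarized above is exactly the AQS template-method pattern, and it can be exercised with a minimal exclusive lock of our own. The sketch below (a hypothetical SimpleMutex, without the reentrancy that ReentrantLock adds) only implements tryAcquire()/tryRelease(); queueing, parking, and waking the successor are all inherited from AQS:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    static int counter = 0; // shared state protected by the mutex

    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            if (compareAndSetState(0, 1)) {   // CAS the synchronization state 0 -> 1
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;                      // failure: AQS enqueues and parks us
        }
        @Override
        protected boolean tryRelease(int arg) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);                       // release; AQS then unparks the successor
            return true;
        }
    }

    private final Sync sync = new Sync();
    public void lock()   { sync.acquire(1); }  // template method: tryAcquire + queue
    public void unlock() { sync.release(1); }  // template method: tryRelease + unpark

    public static void main(String[] args) throws InterruptedException {
        SimpleMutex mutex = new SimpleMutex();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                mutex.lock();
                try { counter++; } finally { mutex.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);           // 20000: no increment is lost
    }
}
```

Two threads each perform 10,000 locked increments; the final count of 20,000 shows that the AQS machinery serializes the critical sections even though we wrote only the two state-transition methods.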
4.2 FairSync fair lock in ReentrantLock
We have analyzed the implementation of the nonfair lock in ReentrantLock in detail; next, let's explore the fair lock. Before that, we need to understand what fair and unfair mean here: the distinction is based on the order in which threads arrive. A fair lock fully follows the FIFO principle, meaning that, in arrival order, the thread that executes the lock-acquisition logic first obtains the lock resource first. A nonfair lock is the opposite. Let's look at the implementation of the tryAcquire(int acquires) method in the FairSync class:
```java
// ReentrantLock → FairSync → tryAcquire(int acquires)
protected final boolean tryAcquire(int acquires) {
    // Get the current thread
    final Thread current = Thread.currentThread();
    // Get the synchronization state value
    int c = getState();
    if (c == 0) { // 0 means no thread currently holds the lock
        // In the fair implementation, first check whether the
        // synchronization queue already contains waiting nodes
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
The only difference between tryAcquire(int acquires) in FairSync and nonfairTryAcquire(int acquires) in NonfairSync is that the fair implementation calls hasQueuedPredecessors() before attempting to modify state, to check whether the AQS synchronization queue already contains nodes. If it does, some thread requested the lock earlier, so the current thread is wrapped in a Node and appended to the tail of the queue to wait. In the nonfair implementation, regardless of whether nodes already exist in the queue, the thread first tries to modify the state synchronization flag to grab the lock, and only joins the queue when that attempt fails. In practice, if the business logic does not require a particular execution order, prefer the nonfair lock: in real applications its throughput usually far exceeds that of the fair lock.
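Switching between the two Sync implementations is just a constructor argument. A quick sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // default: NonfairSync
        ReentrantLock fair   = new ReentrantLock(true);  // true selects FairSync
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

isFair() only reports which Sync subclass backs the lock; the locking API is otherwise identical, so fairness can be changed without touching the code that calls lock()/unlock().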
4.3. How to choose between ReentrantLock and synchronized in actual development?
In the previous article, "Thoroughly Understand the Implementation Principle of the synchronized Keyword in Java Concurrent Programming", we covered the underlying implementation of the implicit synchronized lock in detail, and noted that since JDK 1.6 the JVM has optimized the synchronized keyword considerably. So how should we choose between ReentrantLock and synchronized in practice? synchronized is easier to use and has clearer semantics, and the JVM optimizes it for us automatically. ReentrantLock is more flexible and offers richer features, such as timed lock acquisition, interruptible lock acquisition, and a wait/wake mechanism with multiple condition variables. When these features are needed, choose ReentrantLock; the final choice still depends on the business requirements.
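Timed acquisition is a good example of a feature synchronized cannot express. The following sketch (class and field names are mine, for illustration) tries to take a lock held by another thread and gives up after 100 ms instead of blocking forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTimeoutDemo {
    static volatile Boolean acquired = null;

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                                   // main thread holds the lock
        Thread t = new Thread(() -> {
            try {
                // wait at most 100 ms for the lock, then give up
                acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
                if (acquired) lock.unlock();
            } catch (InterruptedException ignored) { }
        });
        t.start();
        t.join();
        System.out.println("acquired within timeout: " + acquired); // false
        lock.unlock();
    }
}
```

Because the main thread never releases the lock during the attempt, tryLock times out and returns false, letting the caller fall back to other work rather than deadlocking.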
For example, suppose a project's traffic is very heavy from 1:00 a.m. to 5:00 a.m., but access frequency is relatively low at other times. What kind of lock is more suitable for this situation? The answer is ReentrantLock.
Why? Because, as mentioned in the earlier article on synchronized, lock upgrading/inflation in synchronized is irreversible, and lock downgrading almost never happens while a Java program runs. In this business scenario, during the surge in traffic, synchronized may inflate directly into a heavyweight lock; once that happens, every subsequent acquisition is a heavyweight-lock acquisition, which severely hurts program performance.
4.4. Implementation summary of ReentrantLock
- Basic components:
- Synchronization state flag: indicates the occupancy state of the lock resource
- Synchronization queue: stores the threads that failed to acquire the lock
- Waiting queues: used to implement multi-condition wake-up
- Node: each queue element, wrapping a thread
- Basic actions:
- CAS-modify the synchronization state flag
- On failed lock acquisition, join the synchronization queue and block
- On lock release, wake the thread of the first queued node in the synchronization queue
- Locking action:
- Call tryAcquire() to modify the state flag; return true on success, or join the queue and wait on failure
- After joining the queue, check whether the predecessor node is in the SIGNAL state; if so, directly block and park the current thread
- If not, check whether it is in the CANCELLED state; if so, walk forward and remove all CANCELLED nodes from the queue
- If the predecessor is in state 0 or PROPAGATE, change it to SIGNAL
- After being woken from blocking, if the predecessor is the head, try to acquire the lock; return true on success, or continue blocking on failure
- Unlocking action:
- Call tryRelease() to release the lock and modify the state flag; return true on success and false on failure
- After the lock is released successfully, wake the blocked thread node at the front of the synchronization queue
- The woken node replaces the current node as the head node
5, Condition: the implementation principle of the multi-condition wait/wake mechanism
In Java, every object in the heap is associated with a monitor object at creation, and every Java object has a set of monitor methods: wait(), notify(), and notifyAll(). Through these methods we can implement cooperation and communication between threads, namely the wait/wake mechanism, as in the common producer-consumer model. However, using these monitor methods requires the synchronized keyword, because the wait/wake mechanism of Java objects is implemented on top of the monitor object. Compared with the synchronized wait/wake mechanism, Condition is more flexible: synchronized notify() can only wake one randomly chosen thread waiting for the lock, while Condition can wake a waiting thread more precisely. Another difference from the synchronized mechanism: in the monitor model an object has one synchronization queue and one waiting queue, whereas a lock object in AQS has one synchronization queue and multiple waiting queues. The object monitor lock model is as follows:
5.1. Quick understanding and hands-on Condition Practice
Condition is an interface; it is implemented by the ConditionObject class inside AQS. The methods defined in Condition are as follows:
```java
public interface Condition {
    /**
     * Makes the current thread wait until it is notified or interrupted.
     * The thread wakes up when another thread calls signal() or signalAll(),
     * or when another thread interrupts it via interrupt().
     * await() corresponds to wait() in the synchronized wait/wake mechanism.
     */
    void await() throws InterruptedException;

    /**
     * Same as await(), but this method does not respond to thread interruption.
     */
    void awaitUninterruptibly();

    /**
     * Same as await(), but supports a timeout (in nanoseconds):
     * when the wait exceeds nanosTimeout, the waiting state ends.
     */
    long awaitNanos(long nanosTimeout) throws InterruptedException;

    /**
     * Same as awaitNanos(long nanosTimeout), but lets you declare the time unit.
     */
    boolean await(long time, TimeUnit unit) throws InterruptedException;

    /**
     * Same as await(); returns true if woken within the deadline, false otherwise.
     */
    boolean awaitUntil(Date deadline) throws InterruptedException;

    /**
     * Wakes one thread node in the waiting queue; that thread is moved from
     * the waiting queue to the synchronization queue to block for the lock.
     * signal() corresponds to notify() in the synchronized mechanism.
     */
    void signal();

    /**
     * Same as signal(), but wakes all thread nodes in the waiting queue.
     * signalAll() corresponds to notifyAll() in the synchronized mechanism.
     */
    void signalAll();
}
```
The methods defined in the Condition interface fall into two categories: the await methods, which suspend/park a thread, and the signal methods, which wake one. Next, let's use Condition to implement the classic producer/consumer example to get a quick feel for how Condition is used:
```java
public class Bamboo {
    private int bambooCount = 0;
    private boolean flag = false;
    Lock lock = new ReentrantLock();
    Condition producerCondition = lock.newCondition();
    Condition consumerCondition = lock.newCondition();

    public void producerBamboo() {
        lock.lock(); // Acquire the lock
        try {
            while (flag) { // There is already bamboo
                try {
                    producerCondition.await(); // Suspend the producing thread
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            bambooCount++; // Bamboo count + 1
            System.out.println(Thread.currentThread().getName()
                    + "....produced bamboo, current count: " + bambooCount);
            flag = true; // Change the status to true
            consumerCondition.signal(); // After producing, wake a consuming thread
        } finally {
            lock.unlock(); // Release the lock
        }
    }

    public void consumerBamboo() {
        lock.lock(); // Acquire the lock
        try {
            while (!flag) { // No bamboo available
                try {
                    consumerCondition.await(); // Suspend the consuming thread
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            bambooCount--; // Bamboo count - 1
            System.out.println(Thread.currentThread().getName()
                    + "....consumed bamboo, current count: " + bambooCount);
            flag = false; // Change the status to false
            producerCondition.signal(); // After consuming, wake a producing thread
        } finally {
            lock.unlock(); // Release the lock
        }
    }
}

/* ------------------ Split line -------------------- */

// Test class
public class ConditionDemo {
    public static void main(String[] args) {
        Bamboo b = new Bamboo();
        Producer producer = new Producer(b);
        Consumer consumer = new Consumer(b);
        // Producer thread group
        Thread t1 = new Thread(producer, "producer-t1");
        Thread t2 = new Thread(producer, "producer-t2");
        Thread t3 = new Thread(producer, "producer-t3");
        // Consumer thread group
        Thread t4 = new Thread(consumer, "consumer-t4");
        Thread t5 = new Thread(consumer, "consumer-t5");
        Thread t6 = new Thread(consumer, "consumer-t6");
        t1.start();
        t2.start();
        t3.start();
        t4.start();
        t5.start();
        t6.start();
    }
}

// Producer
class Producer implements Runnable {
    private Bamboo bamboo;
    public Producer(Bamboo bamboo) { this.bamboo = bamboo; }
    @Override
    public void run() {
        for (;;) {
            bamboo.producerBamboo();
        }
    }
}

// Consumer
class Consumer implements Runnable {
    private Bamboo bamboo;
    public Consumer(Bamboo bamboo) { this.bamboo = bamboo; }
    @Override
    public void run() {
        for (;;) {
            bamboo.consumerBamboo();
        }
    }
}
```
The code above implements a bamboo production/consumption example, a simple use of Condition. There are six threads: T1, T2, and T3 form the producer thread group and T4, T5, and T6 the consumer thread group, all running at the same time. We must ensure that the producer group produces bamboo before the consumer group can consume it; otherwise the consumer threads can only wait until the producer group produces bamboo, and no bamboo may be consumed twice. The Bamboo class defines two methods, producerBamboo() and consumerBamboo(), for producing and consuming bamboo, and a shared ReentrantLock ensures that the two thread groups have no thread-safety problems while running concurrently. Because the production/consumption order must be guaranteed, two wait conditions are created on the lock object: producerCondition and consumerCondition. The former makes the producer group wait while bamboo remains; the latter controls the consumer group. A flag field indicates whether bamboo is available: false means there is no bamboo, so bamboo must be produced first and a consumer thread woken afterwards; true means the opposite.
Compared with the synchronized wait/wake mechanism, the advantage in this example is that two wait conditions, producerCondition and consumerCondition, can be created. With two waiting queues, the producer and consumer thread groups can be controlled precisely. If the example were implemented with synchronized wait()/notify(), a consumer thread might wake another consumer thread after consuming, because the Monitor object has only one waiting queue. To avoid this problem, synchronized can only use notifyAll() to wake every thread in the waiting queue, and waking all of them performs much worse than Condition.
5.2. Condition implementation principle analysis
As mentioned earlier, Condition is only an interface; the concrete implementation is the ConditionObject class inside AQS. When analyzing AQS at the beginning of this article, we also mentioned that AQS contains two kinds of queues: the synchronization queue and the waiting queues, and the waiting queues are based on Condition. Nodes in both kinds of queues are built from the AQS Node class, but the waitStatus of a node in a waiting queue is CONDITION. The ConditionObject class holds two node references, firstWaiter and lastWaiter, storing the head and tail of the waiting queue; each node stores the reference to the next node in Node.nextWaiter, so the waiting queue is singly linked. The overall structure of the AQS synchronizer is therefore as follows:
As the figure above shows, unlike the synchronization queue, each Condition owns its own waiting queue; creating multiple Conditions on one ReentrantLock therefore creates multiple waiting queues. Although nodes in both queues are built from the same Node class, a synchronization-queue node sits in a doubly linked list through its pred predecessor and next successor references, while a waiting-queue node only records its successor through nextWaiter, forming a singly linked list. Like the synchronization queue, however, the waiting queue is FIFO, and each node stores the information of a thread waiting on the Condition object. When a thread calls one of the await() family of methods, it first releases the lock, then builds a node encapsulating its information and appends it to the waiting queue, where it stays until it is signalled, interrupted, or times out. Let's explore the Condition wait/wake mechanism from the source code:
```java
public final void await() throws InterruptedException {
    // Respond to a pending interrupt by throwing immediately
    if (Thread.interrupted())
        throw new InterruptedException();
    // Build a new node encapsulating the thread and append it to the waiting queue
    Node node = addConditionWaiter();
    // Fully release the lock held by the current thread: state is set to 0
    // no matter how many times the thread has re-entered
    int savedState = fullyRelease(node);
    int interruptMode = 0;
    // Spin until the node has been moved to the synchronization queue, i.e. signalled
    while (!isOnSyncQueue(node)) {
        // Not signalled yet: park the current thread at the JVM level
        LockSupport.park(this);
        // If the thread was woken by an interrupt, exit the loop
        if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
            break;
    }
    // Once woken, spin to try to re-acquire the lock, then check the interrupt mode
    if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        interruptMode = REINTERRUPT;
    // Clean up after cancellation
    if (node.nextWaiter != null)
        // Unlink waiting-queue nodes that are no longer in the CONDITION state
        unlinkCancelledWaiters();
    if (interruptMode != 0)
        reportInterruptAfterWait(interruptMode);
}

// Builds a node encapsulating the current thread's information and enqueues it
private Node addConditionWaiter() {
    Node t = lastWaiter;
    // If the tail node has been cancelled, clean it out first
    if (t != null && t.waitStatus != Node.CONDITION) {
        unlinkCancelledWaiters();
        t = lastWaiter;
    }
    // Build a new node for the current thread in the CONDITION waiting state
    Node node = new Node(Thread.currentThread(), Node.CONDITION);
    // Append the node to the waiting queue
    if (t == null)
        firstWaiter = node;
    else
        t.nextWaiter = node;
    lastWaiter = node;
    return node;
}
```
From the code above, it is not hard to see that await() mainly does four things:
- 1, Call addConditionWaiter() to build a new node encapsulating the thread's information and add it to the waiting queue
- 2, Call fullyRelease(node) to release the lock resource (state is set to 0 no matter how many times the lock-holding thread has re-entered) and, at the same time, wake up the thread of the successor node in the synchronization queue
- 3, Call isOnSyncQueue(node) in a spin loop to check whether the node has been moved to the synchronization queue; as long as it has not, park the current thread at the JVM level
- 4, After the node's thread is woken up, i.e. after the node has been transferred from the waiting queue to the synchronization queue, call acquireQueued(node, savedState) to spin and try to re-acquire the lock resource
At this point the whole await() method is complete: calling await() → building a node and enqueuing it → releasing the lock and waking the successor node in the synchronization queue → parking the thread at the JVM level → being woken and competing for the lock again. The other await()-style waiting methods work on the same principle, so we won't repeat them. Next, let's look at the signal() wake-up method:
```java
public final void signal() {
    // Throw if the current thread does not hold the exclusive lock resource
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    Node first = firstWaiter;
    // Wake up the thread of the first node in the waiting queue
    if (first != null)
        doSignal(first);
}
```
The signal() wake-up method does two things:
- 1, Check whether the current thread holds the exclusive lock resource; if the thread calling the wake-up method does not hold the lock, an exception is thrown immediately (there is no waiting queue in shared mode, so Condition cannot be used there)
- 2, Wake up the thread of the first node in the waiting queue by calling doSignal(first)
Let's take a look at the implementation of the doSignal(first) method:
```java
private void doSignal(Node first) {
    do {
        // Remove the first node from the waiting queue; if its nextWaiter is null,
        // there are no other nodes left, so clear the tail reference as well
        if ((firstWaiter = first.nextWaiter) == null)
            lastWaiter = null;
        first.nextWaiter = null;
        // If the signalled node could not be transferred to the synchronization
        // queue (it may have been cancelled) and the waiting queue still has
        // other nodes, keep looping and wake the successor node's thread instead
    } while (!transferForSignal(first) && (first = firstWaiter) != null);
}

// transferForSignal() method
final boolean transferForSignal(Node node) {
    /*
     * Try to reset the signalled node's waitStatus from CONDITION to 0, the
     * initial state. If the CAS fails, the node is no longer in the CONDITION
     * waiting state (it has been cancelled), so return false and let doSignal()
     * move on to the successor node.
     * Why does a failed CAS mean the node is not in the CONDITION state?
     * Because any thread executing here must hold the exclusive lock, the only
     * way the CAS can fail is that the expected value, CONDITION, no longer
     * matches the node's actual waitStatus.
     */
    if (!compareAndSetWaitStatus(node, Node.CONDITION, 0))
        return false;
    // Append the node to the tail of the synchronization queue;
    // enq() returns its predecessor node p
    Node p = enq(node);
    // If the predecessor has been cancelled, or setting its status
    // to SIGNAL fails, wake the signalled node's thread directly
    int ws = p.waitStatus;
    if (ws > 0 || !compareAndSetWaitStatus(p, ws, Node.SIGNAL))
        LockSupport.unpark(node.thread);
    return true;
}
```
As the comments in the code above show, doSignal() does only three things:
- 1, Remove the first awakened node from the waiting queue, then maintain the firstWaiter and lastWaiter node references of the waiting queue
- 2, Append the removed node to the tail of the synchronization queue; if the transfer fails and there are other nodes in the waiting queue, keep looping and signal the other nodes' threads instead
- 3, After the node successfully joins the synchronization queue, if its predecessor node is already cancelled, or setting the predecessor's status to SIGNAL fails, wake the node's thread directly via LockSupport.unpark()
At this point the logic of signal() is complete, but note that to understand the Condition wait/wake principle we must read await() and signal() together. After signal() finishes, the awakened thread exits the spin loop in await(): since its node has been moved into the synchronization queue, the while (!isOnSyncQueue(node)) condition no longer holds and the loop terminates naturally. The awakened thread then calls acquireQueued() to try to re-acquire the lock resource.
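To make the await()/signal() interplay concrete, here is a small self-contained sketch (class and variable names are mine, not from the article): one thread awaits on a condition while a second thread signals it, and the awakened thread can only proceed past await() after re-acquiring the lock that the signalling thread releases.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class AwaitSignalDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean signalled = false;

    public boolean runOnce() throws InterruptedException {
        AtomicBoolean resumed = new AtomicBoolean(false);
        Thread waiter = new Thread(() -> {
            lock.lock();
            try {
                while (!signalled) {
                    ready.await();      // releases the lock and parks in the waiting queue
                }
                resumed.set(true);      // only reached after re-acquiring the lock
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        waiter.start();
        Thread.sleep(100);              // give the waiter time to park
        lock.lock();
        try {
            signalled = true;
            ready.signal();             // moves the waiter's node to the synchronization queue
        } finally {
            lock.unlock();              // the waiter can now win the lock and return from await()
        }
        waiter.join(2000);
        return resumed.get();
    }
}
```

Note that signal() itself does not hand over the lock; the waiter resumes only once the signalling thread has called unlock() and the waiter's node wins the lock in the synchronization queue.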
6, Differences between the Condition interface and the Monitor object wait/wake mechanism
Finally, let's briefly compare the multi-Condition wait/wake mechanism of ReentrantLock with the Monitor object-lock wait/wake mechanism of synchronized:
Comparison item | Monitor | Condition |
---|---|---|
Precondition | Hold the object lock | Hold the exclusive lock and create a Condition object |
Invocation | Object.wait() | condition.await() and related methods |
Number of waiting queues | One | Multiple |
Releases the lock while waiting | Supported | Supported |
Waiting without responding to interrupts | Not supported | Supported |
Waiting until a future point in time | Not supported | Supported |
Timed waiting | Supported | Supported |
Precisely waking up a chosen waiting thread | Not supported | Supported |
Waking up all waiting threads | Supported | Supported |
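The timed-wait rows above can be illustrated with a short sketch (class and method names are mine). Condition offers both a relative timed wait, await(time, unit), and a wait until an absolute deadline, awaitUntil(Date), the latter having no Monitor equivalent; both return false when the wait ends without a signal:

```java
import java.util.Date;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TimedWaitDemo {
    public static boolean waitBriefly() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition cond = lock.newCondition();
        lock.lock();
        try {
            // Relative timed wait: returns false if the time elapses without a signal
            boolean signalled = cond.await(50, TimeUnit.MILLISECONDS);
            // Absolute deadline wait: returns false if the deadline passes without a signal
            boolean beforeDeadline = cond.awaitUntil(new Date(System.currentTimeMillis() + 50));
            return signalled || beforeDeadline;
        } finally {
            lock.unlock();
        }
    }
}
```

Since no other thread ever signals cond here, both waits time out and waitBriefly() returns false.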
7, References and books
- Deep understanding of JVM virtual machines
- The beauty of Java Concurrent Programming
- Java high concurrency programming
- Core technology of 100 million traffic website architecture
- Java Concurrent Programming Practice
So far, the analysis of the implementation principle of AQS exclusive mode has come to an end. In the next article, we will further explore the concrete implementation of shared mode~