1. Concept
synchronized guarantees atomicity, visibility, and ordering for the method or code block it decorates. synchronized locks are reentrant and non-interruptible, and are suitable when thread contention is not fierce.
- A method or code block described by the synchronized keyword can be accessed by only one thread at a time in a multithreaded environment. Until the thread holding the lock finishes executing, other threads that want to call the related methods must queue; only when the current thread finishes does it release the lock to the others. This guarantees atomicity.
- The data accessed in a method or code block described by the synchronized keyword stays synchronized across threads: when the lock is acquired, shared memory is copied into the thread's own working cache for operation, and before the lock is released the data in the cache is flushed back to shared memory. This guarantees visibility.
- A method or code block described by the synchronized keyword can be accessed by only one thread at a time in a multithreaded environment, so the code inside it executes as if sequential and the effects of JMM instruction reordering cannot be observed across the lock. This guarantees ordering.
When using the synchronized keyword, you need to pay attention to the following points:
- A lock can only be acquired by one thread at the same time, and the thread without the lock can only wait;
- Each instance has its own lock (this), and different instances do not affect each other. Exception: when the lock object is *.class, or the method is a synchronized static method, all instances share the same lock.
- A synchronized method releases the lock whether it completes normally or throws an exception.
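The last point can be verified with a small sketch (the class name and the thrown exception are illustrative):

```java
public class ExceptionReleaseDemo {
    private static final Object LOCK = new Object();

    // Returns true if the monitor can be re-acquired after a thread
    // threw an exception while holding it
    static boolean lockFreeAfterException() throws InterruptedException {
        Thread t = new Thread(() -> {
            synchronized (LOCK) {
                throw new IllegalStateException("boom"); // lock is released on the way out
            }
        });
        t.setUncaughtExceptionHandler((th, e) -> { /* expected; ignore for the demo */ });
        t.start();
        t.join();
        synchronized (LOCK) { // would block forever if the monitor were still held
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(lockFreeAfterException()); // true
    }
}
```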
2. Object lock
Includes method locks (the default lock object is this, the current instance) and synchronized-block locks (you specify the lock object yourself)
Code block form:
Manually specify the lock object, which can be this or a custom lock
- Example 1
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        // Synchronized block form -- the lock is this. Both threads use the same lock,
        // so thread 1 must wait until thread 0 releases it before executing
        synchronized (this) {
            System.out.println("I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " end");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
I'm thread Thread-0
Thread-0 end
I'm thread Thread-1
Thread-1 end
```
- Example 2
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();
    // Create 2 locks
    Object block1 = new Object();
    Object block2 = new Object();

    @Override
    public void run() {
        // This block uses the first lock. When it is released, the following block
        // can execute immediately because it uses the second lock
        synchronized (block1) {
            System.out.println("block1 lock, I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block1 lock, " + Thread.currentThread().getName() + " end");
        }
        synchronized (block2) {
            System.out.println("block2 lock, I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block2 lock, " + Thread.currentThread().getName() + " end");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
block1 lock, I'm thread Thread-0
block1 lock, Thread-0 end
block2 lock, I'm thread Thread-0 // the first thread enters the second block as soon as it finishes the first, because the two blocks use different locks
block1 lock, I'm thread Thread-1
block2 lock, Thread-0 end
block1 lock, Thread-1 end
block2 lock, I'm thread Thread-1
block2 lock, Thread-1 end
```
Method lock form:
synchronized modifies the normal method, and the lock object defaults to this
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    public synchronized void method() {
        System.out.println("I'm thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " end");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
I'm thread Thread-0
Thread-0 end
I'm thread Thread-1
Thread-1 end
```
3. Class lock
To synchronize a static method or to specify that a lock object is a Class object
synchronized modifies static methods
- Example 1
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on an ordinary method: the default lock is this, the current instance
    public synchronized void method() {
        System.out.println("I'm thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " end");
    }

    public static void main(String[] args) {
        // t1 and t2 run on two different instances, so their this locks differ
        // and the code is not serialized
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
I'm thread Thread-0
I'm thread Thread-1
Thread-1 end
Thread-0 end
```
- Example 2
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on a static method: the default lock is the current Class object,
    // so no matter which instance a thread goes through, there is only one lock
    public static synchronized void method() {
        System.out.println("I'm thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " end");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
I'm thread Thread-0
Thread-0 end
I'm thread Thread-1
Thread-1 end
```
synchronized specifies that the lock object is a Class object
```java
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        // All threads share the same Class-object lock
        synchronized (SynchronizedObjectLock.class) {
            System.out.println("I'm thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " end");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}
```
Output results:
```
I'm thread Thread-0
Thread-0 end
I'm thread Thread-1
Thread-1 end
```
4. Principle of locking and releasing lock
Go deep into the JVM to see the bytecode, and create the following code:
```java
public class SynchronizedDemo2 {
    Object object = new Object();

    public void method1() {
        synchronized (object) {
        }
    }
}
```
Use the javac command to compile and generate the .class file:

```shell
javac SynchronizedDemo2.java
```

Use the javap command to decompile and inspect the .class file:

```shell
javap -verbose SynchronizedDemo2.class
```
Get the following information:
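For reference, an abridged javap listing of method1 looks roughly like this (constant-pool indices and bytecode offsets vary with the JDK version):

```
public void method1();
  Code:
     0: aload_0
     1: getfield      #3   // Field object:Ljava/lang/Object;
     4: dup
     5: astore_1
     6: monitorenter        // enter the monitor
     7: aload_1
     8: monitorexit         // exit the monitor on the normal path
     9: goto          17
    12: astore_2
    13: aload_1
    14: monitorexit         // exit the monitor on the exception path
    15: aload_2
    16: athrow
    17: return
```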
Just pay attention to the monitorenter and monitorexit instructions in the listing.
The monitorenter and monitorexit instructions cause the object's lock counter to be incremented or decremented by 1. Each object is associated with exactly one monitor (lock) at a time, and a monitor can be held by only one thread at a time. When a thread executes monitorenter to try to obtain ownership of the monitor associated with the object, one of three situations occurs:
- The monitor's counter is 0, meaning it has not been acquired yet. The thread acquires it immediately and increments the counter to 1. From then on, any other thread that wants the monitor has to wait.
- The thread already owns the monitor and re-enters it, so the counter is incremented again to 2, and keeps accumulating with each further reentry. This is the reentrancy mechanism.
- The lock is already held by another thread, so the current thread waits for it to be released.
monitorexit instruction: releases ownership of the monitor. The release process is simple: the monitor's counter is decremented by 1. If the result is not 0, the lock had been reentered and the current thread still owns it. If the counter becomes 0, the current thread no longer owns the monitor, that is, the lock is released.
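The reentrancy described above can be seen directly at the language level; a minimal sketch (class and method names are illustrative):

```java
public class ReentrancyDemo {
    // outer() and inner() both lock this; the same thread enters the monitor twice
    public synchronized int outer() {
        return inner() + 1; // re-enters the monitor it already holds; counter goes 1 -> 2
    }

    public synchronized int inner() {
        return 1; // counter drops back to 1 on return, then to 0 when outer() exits
    }

    public static void main(String[] args) {
        System.out.println(new ReentrancyDemo().outer()); // 2
    }
}
```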
The following figure shows the relationship between objects, object monitors, synchronization queues, and execution thread status:
As the figure shows, any thread that wants to access the Object must first obtain the Object's monitor. If the acquisition fails, the thread enters the synchronization queue and its state becomes BLOCKED. When the monitor's owner releases it, the threads waiting in the synchronization queue get a chance to acquire the monitor again.
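The BLOCKED state can be observed directly; a minimal sketch (the class name and the 200 ms sleep are illustrative assumptions):

```java
public class BlockedStateDemo {
    private static final Object LOCK = new Object();

    // Returns the state of a thread stuck waiting to enter a held monitor
    static Thread.State contendedState() throws InterruptedException {
        synchronized (LOCK) {              // main thread holds the monitor
            Thread waiter = new Thread(() -> {
                synchronized (LOCK) { /* not entered while main holds the lock */ }
            });
            waiter.start();
            Thread.sleep(200);             // give the waiter time to block
            return waiter.getState();      // BLOCKED: waiting for the monitor
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(contendedState()); // BLOCKED
    }
}
```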
5. Principle of ensuring visibility: memory model and happens before rule
The happens-before rule relevant to synchronized is the monitor lock rule: an unlock of a monitor happens-before every subsequent lock of that same monitor. Consider the following code:
```java
public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() {     // 1
        a++;                                // 2
    }                                       // 3

    public synchronized void reader() {     // 4
        int i = a;                          // 5
    }                                       // 6
}
```
The happens before relationship of this code is shown in the figure:
In the figure, each arrow connects two nodes in a happens-before relationship. The black arrows are derived from the program order rule, the red one from the monitor lock rule (thread A releases the lock, then thread B acquires it), and the blue ones are inferred step by step from the program order rule, the monitor lock rule, and the transitivity rule. Now let's focus on 2 happens-before 5. What can we conclude from this relationship?
By the definition of happens-before: if A happens-before B, then the result of A is visible to B and A is ordered before B. Thread A first increments the shared variable a. From the 2 happens-before 5 relationship, the result of thread A's write is visible to thread B; that is, the value of a read by thread B is 1.
6. Optimization of lock in JVM
In simple terms, the monitorenter and monitorexit bytecodes rely on the underlying operating system's mutex lock. Because using a mutex requires suspending the current thread and switching from user mode to kernel mode, it is very expensive. Yet in most cases, synchronized methods actually run in a single-threaded (contention-free) environment, and calling a mutex lock every time would seriously hurt performance. Therefore JDK 1.6 introduced many optimizations to the lock implementation, such as lock coarsening, lock elimination, lightweight locks, biased locks, and adaptive spinning, to reduce the cost of lock operations.
- Lock coarsening: reduce unnecessary unlock-then-lock pairs by merging multiple consecutive lock regions into one larger lock region.
- Lock elimination: using the JIT compiler's escape analysis at run time, locks protecting data that cannot be shared with other threads are removed from the current synchronized block. Escape analysis can also allocate objects on the thread's local stack, which in turn reduces garbage collection overhead on the heap.
- Lightweight locking: this implementation is based on the assumption that, in practice, most synchronized code runs without contention (i.e. in a single-threaded environment). In the contention-free case, it completely avoids the operating-system-level heavyweight mutex; the acquisition and release of the lock are completed by a single CAS atomic instruction in monitorenter and monitorexit. Under contention, a thread whose CAS fails falls back to the OS mutex and blocks, to be woken up when the lock is released (the detailed steps are discussed below).
- Biased locking: avoids executing even the CAS atomic instruction when acquiring a lock without contention. A CAS instruction is cheap compared with a heavyweight lock, but it still incurs considerable local latency.
- Adaptive spinning: when a thread fails the CAS while acquiring a lightweight lock, before entering the mutex (semaphore) associated with the monitor it busy-waits (spins) and retries. If it still fails after a certain number of attempts, it calls the semaphore (i.e. the mutex lock) associated with the monitor and enters the blocked state.
Comparison of advantages and disadvantages of locks
| Lock | Advantages | Disadvantages | Use scenario |
| --- | --- | --- | --- |
| Biased lock | Locking and unlocking require no CAS operation and no extra cost; only a nanosecond-level gap compared with an unsynchronized method | If there is lock contention between threads, revoking the bias incurs extra cost | Scenarios where only one thread accesses the synchronized block |
| Lightweight lock | Competing threads do not block, improving response time | A thread that keeps losing the lock contention consumes CPU by spinning | Scenarios pursuing response time, where synchronized blocks execute very quickly |
| Heavyweight lock | Contending threads do not spin, so no CPU is wasted | Threads block and response time is slow; frequently acquiring and releasing locks under multithreading brings a huge performance cost | Scenarios pursuing throughput, where synchronized blocks execute for a long time |

2, ReentrantLock

1. Concept
ReentrantLock is a reentrant, exclusive lock. It has the same basic behavior and semantics as the synchronized monitor lock, but compared with the synchronized keyword it is more flexible and powerful, adding advanced features such as polling, timed waits, and interruptible lock acquisition. As the name suggests, ReentrantLock is a synchronization mechanism that supports reentrant locking. In addition, the lock offers a choice between fair and nonfair acquisition.
2. Source code analysis
ReentrantLock implements the Lock interface, which defines the lock and unlock operations as well as a newCondition method that creates a Condition.
```java
public class ReentrantLock implements Lock, java.io.Serializable
```
The sync field of ReentrantLock is very important: most operations on ReentrantLock translate directly into operations on the Sync and AbstractQueuedSynchronizer classes.
- Synchronizer field

```java
private final Sync sync;
```
- ReentrantLock() constructor
The default is to adopt an unfair policy to acquire locks
```java
public ReentrantLock() {
    // Default nonfair strategy
    sync = new NonfairSync();
}
```
- ReentrantLock(boolean) constructor
You can pass a parameter to choose between the fair and nonfair strategies. If the parameter is true, the fair strategy is used; otherwise the nonfair strategy is used:
```java
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
```
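The chosen policy can be checked through ReentrantLock's isFair() method; a small sketch (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true if the lock reports the fair policy
    static boolean isFairLock(boolean fair) {
        return new ReentrantLock(fair).isFair();
    }

    public static void main(String[] args) {
        System.out.println(isFairLock(true));  // true: FairSync
        System.out.println(isFairLock(false)); // false: NonfairSync, same as the default
    }
}
```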
Inner class
ReentrantLock has three inner classes in total, and the three inner classes are closely related. Let's first look at the relationship between the three classes.
ReentrantLock has three inner classes: Sync, NonfairSync, and FairSync. NonfairSync and FairSync both inherit from Sync, and Sync inherits from the abstract class AbstractQueuedSynchronizer. Let's analyze them one by one.
- Sync class
The source code of the Sync class is as follows:
```java
abstract static class Sync extends AbstractQueuedSynchronizer {
    // Serialization version number
    private static final long serialVersionUID = -5179523762034025860L;

    // Acquire the lock
    abstract void lock();

    // Nonfair acquisition
    final boolean nonfairTryAcquire(int acquires) {
        // Current thread
        final Thread current = Thread.currentThread();
        // Get the synchronization state
        int c = getState();
        if (c == 0) { // No thread holds the lock
            // CAS the state; state 0 means the lock is not held
            if (compareAndSetState(0, acquires)) {
                // Mark the current thread as the exclusive owner
                setExclusiveOwnerThread(current);
                return true; // Success
            }
        } else if (current == getExclusiveOwnerThread()) { // The current thread already owns the lock
            int nextc = c + acquires; // Increase the reentry count
            if (nextc < 0) // Overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true; // Success
        }
        return false; // Failure
    }

    // Release the lock in exclusive mode
    protected final boolean tryRelease(int releases) {
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread()) // The current thread is not the owner
            throw new IllegalMonitorStateException();
        boolean free = false;
        if (c == 0) {
            free = true; // Fully released; clear the owner
            setExclusiveOwnerThread(null);
        }
        setState(c);
        return free;
    }

    // Whether the resource is held by the current thread
    protected final boolean isHeldExclusively() {
        return getExclusiveOwnerThread() == Thread.currentThread();
    }

    // Create a new condition
    final ConditionObject newCondition() {
        return new ConditionObject();
    }

    // Return the thread holding the resource
    final Thread getOwner() {
        return getState() == 0 ? null : getExclusiveOwnerThread();
    }

    // Return the hold count
    final int getHoldCount() {
        return isHeldExclusively() ? getState() : 0;
    }

    // Whether the resource is held
    final boolean isLocked() {
        return getState() != 0;
    }

    /**
     * Reconstitutes the instance from a stream (that is, deserializes it).
     */
    private void readObject(java.io.ObjectInputStream s)
            throws java.io.IOException, ClassNotFoundException {
        s.defaultReadObject();
        setState(0); // reset to unlocked state
    }
}
```
The Sync class has the following methods and functions.
- NonfairSync class
The NonfairSync class inherits the Sync class, which means that the unfair policy is adopted to obtain the lock. It implements the abstract lock method in the Sync class. The source code is as follows:
```java
// Nonfair lock
static final class NonfairSync extends Sync {
    // Serialization version number
    private static final long serialVersionUID = 7316153563782823691L;

    // Acquire the lock
    final void lock() {
        if (compareAndSetState(0, 1))
            // CAS succeeded; state 0 meant the lock was free.
            // Mark the current thread as the exclusive owner
            setExclusiveOwnerThread(Thread.currentThread());
        else
            // Lock already held, or the CAS failed:
            // acquire in exclusive mode, ignoring interrupts
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}
```
Note: from the source of the lock method we can see that each attempt to acquire the lock does not follow the principle of fair waiting, that is, letting the thread that has waited longest acquire the lock first; the caller races for the lock immediately.
- FairSyn class
The FairSync class also inherits the Sync class, which means that fair policy is adopted to acquire locks. It implements the abstract lock method in the Sync class. The source code is as follows:
```java
// Fair lock
static final class FairSync extends Sync {
    // Serialization version number
    private static final long serialVersionUID = -3000897897090466540L;

    final void lock() {
        // Acquire in exclusive mode, ignoring interrupts
        acquire(1);
    }

    // Try to acquire the lock fairly
    protected final boolean tryAcquire(int acquires) {
        // Current thread
        final Thread current = Thread.currentThread();
        // Get the synchronization state
        int c = getState();
        if (c == 0) { // The lock is free
            // Acquire only if no thread has been waiting longer and the CAS succeeds
            if (!hasQueuedPredecessors() &&
                compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) { // The current thread already holds the lock
            int nextc = c + acquires;
            if (nextc < 0) // Exceeds the range of int
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}
```
Note: tracing the lock method shows that even when the resource is free, it first checks whether the sync queue (the data structure in AbstractQueuedSynchronizer) contains a thread that has been waiting longer. If so, the current thread is appended to the end of the waiting queue, which realizes the principle of fair acquisition. The method call chain of FairSync's lock follows, with only the main methods shown.
Note: as long as the resource is held by another thread, a fair lock adds the requesting thread to the tail of the sync queue without trying to grab the resource first. This is the biggest difference from the nonfair lock: a nonfair lock always tries to grab the resource first, and if the resource happens to have just been released, the current thread acquires it immediately, which is exactly the unfairness; only when that attempt fails is the thread appended to the end of the queue.
3. Acquire lock
```java
ReentrantLock lock = new ReentrantLock();
lock.lock();
```
1.lock()
The above code is the common way to obtain the lock: call ReentrantLock's lock() method. By default, ReentrantLock is a nonfair lock; the implementing class is NonfairSync, an inner class of ReentrantLock. Let's look at the source:
```java
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}
```
Here, the lock method first tries to change state from 0 to 1 with a CAS operation. If it succeeds, it sets exclusiveOwnerThread to the current thread; ReentrantLock uses exclusiveOwnerThread to record the thread holding the lock. If the CAS fails, some thread already holds the lock (state > 0), so acquire(1) is executed to request the lock again.
2.acquire()
```java
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
The acquire method requests the lock in exclusive mode, ignoring all interrupts. It executes tryAcquire at least once; if that succeeds it returns, otherwise the thread enters a block/wake-up cycle until tryAcquire succeeds.
We analyze tryAcquire(), addWaiter(), acquireQueued() one by one.
In this method, tryAcquire(arg) is executed first:
3.tryAcquire()
```java
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
First it checks whether state is 0. If so, it performs the first half of the lock method described above, changing state from 0 to 1 with a CAS. Otherwise it checks whether the current thread is the exclusiveOwnerThread and, if so, increments state; this is where the reentrant lock shows up. Notice that the first half uses CAS for synchronization while the second half does not. The reason is:
The second half only runs when the thread reenters a lock it already holds. Since the current thread already owns the lock, atomic operations on ReentrantLock's fields are unnecessary.
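The reentry count accumulated in state is observable through ReentrantLock's getHoldCount(); a small sketch (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    // Acquires the lock twice from the same thread and reports the peak hold count
    static int peakHoldCount() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();          // state: 0 -> 1 (CAS path)
        lock.lock();          // reentry: state 1 -> 2, no CAS needed
        int peak = lock.getHoldCount();
        lock.unlock();        // state: 2 -> 1
        lock.unlock();        // state: 1 -> 0, lock fully released
        return peak;
    }

    public static void main(String[] args) {
        System.out.println(peakHoldCount()); // 2
    }
}
```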
If tryAcquire() fails, addWaiter() is executed to add an exclusive-mode node to the waiting queue.
4.addWaiter()
```java
private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    enq(node);
    return node;
}
```
Note: this method creates a queue node for the current thread; Node.EXCLUSIVE marks an exclusive lock and Node.SHARED a shared lock.
It first reads the tail node pred of the waiting queue. If pred != null, the current thread's node is appended after pred and joins the waiting queue. If there is no tail node, enq() is executed.
```java
private Node enq(final Node node) {
    for (;;) {
        Node t = tail;
        if (t == null) { // Must initialize
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}
```
In this loop, if a tail now exists, the node is appended to the tail as before. If there is still no tail, an empty dummy node is CASed in as the head (with tail pointing to it), and the next iteration appends the current thread's node behind it.
After the node is inserted, acquireQueued() is called to block the thread.
5.acquireQueued()
```java
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
```
First it obtains the node's predecessor p. If p is the head, it retries tryAcquire(arg); on success it returns. Otherwise it executes shouldParkAfterFailedAcquire and parkAndCheckInterrupt to achieve the blocking effect.
6.shouldParkAfterFailedAcquire()
```java
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        return true;
    if (ws > 0) {
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
```
For a new node built by addWaiter(), waitStatus defaults to 0, so execution falls into the final else branch: CAS sets pred.waitStatus to SIGNAL (-1), and false is returned.
Back in step 5, acquireQueued() keeps looping because shouldParkAfterFailedAcquire() returned false. Assuming the node's predecessor pred is still not the head node, or lock acquisition fails again, shouldParkAfterFailedAcquire() is entered once more. Since pred.waitStatus was set to SIGNAL (-1) in the previous iteration, the first condition now matches, and true is returned directly, indicating the thread should be blocked.
7.parkAndCheckInterrupt()
```java
private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);
    return Thread.interrupted();
}
```
Clearly, once shouldParkAfterFailedAcquire returns true, meaning the thread should block, parkAndCheckInterrupt() is executed to actually block it. The thread then parks here until another thread wakes it up; after waking, it goes through the acquisition logic of acquireQueued (step 5) again.
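The park/unpark primitive used here can be exercised on its own; a minimal sketch (the class name and the 200 ms sleep are illustrative assumptions):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // A parked thread sits in WAITING state until another thread unparks it
    static boolean parkThenUnpark() throws InterruptedException {
        Thread parked = new Thread(LockSupport::park); // blocks, like parkAndCheckInterrupt()
        parked.start();
        Thread.sleep(200);                  // let the thread reach park()
        boolean waiting = parked.getState() == Thread.State.WAITING;
        LockSupport.unpark(parked);         // the wake-up that acquireQueued relies on
        parked.join();
        return waiting;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parkThenUnpark()); // true
    }
}
```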
Let's go back to step 6 and look at a piece of logic we skipped:
```java
if (ws > 0) {
    do {
        node.prev = pred = pred.prev;
    } while (pred.waitStatus > 0);
    pred.next = node;
}
```
When do we encounter ws > 0? When the acquisition request held by pred has been CANCELLED (i.e. its waitStatus is CANCELLED). In that case the loop removes all consecutive CANCELLED predecessors until a non-cancelled one is found, and then false is returned.
8.cancelAcquire()
Here we go back to step 5 and see that the main logic is basically finished. In the finally of this method, there is a cancelAcquire() method
```java
private void cancelAcquire(Node node) {
    if (node == null)
        return;
    node.thread = null;
    // Skip cancelled predecessors
    Node pred = node.prev;
    while (pred.waitStatus > 0)
        node.prev = pred = pred.prev;
    Node predNext = pred.next;
    node.waitStatus = Node.CANCELLED;
    if (node == tail && compareAndSetTail(node, pred)) {
        compareAndSetNext(pred, predNext, null);
    } else {
        int ws;
        if (pred != head &&
            ((ws = pred.waitStatus) == Node.SIGNAL ||
             (ws <= 0 && compareAndSetWaitStatus(pred, ws, Node.SIGNAL))) &&
            pred.thread != null) {
            Node next = node.next;
            if (next != null && next.waitStatus <= 0)
                compareAndSetNext(pred, predNext, next);
        } else {
            unparkSuccessor(node);
        }
        node.next = node; // help GC
    }
}
```
That is, if an exception or interruption occurs while step 5 is executing, the finally block cancels the thread's acquisition request. The core line is node.waitStatus = Node.CANCELLED;, which marks the node's status as CANCELLED.
4. Release lock
1.release()
```java
public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}
```
Releases the lock in exclusive mode. If tryRelease succeeds and returns true, a waiting thread is unblocked.
Clearly, tryRelease is the method that releases the lock. If the release succeeds, the method first checks whether the head node is valid, and finally unparkSuccessor wakes up a waiting thread.
2.tryRelease()
```java
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}
```
First state is decremented by the release amount; then the method checks that the current thread is the one holding the lock, throwing an exception if not. It then examines the new state value: only when it reaches 0 is the free flag set to true. Otherwise the lock was reentered and must be released as many times as it was acquired before state returns to 0.
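This counting behavior can be observed from the outside via ReentrantLock's isLocked(); a small sketch (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReleaseDemo {
    // Lock twice, release once: the lock must still be held (state went 2 -> 1, not 0)
    static boolean heldAfterSingleUnlock() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                 // state: 0 -> 1
        lock.lock();                 // reentry, state: 1 -> 2
        lock.unlock();               // state: 2 -> 1, tryRelease returns false
        boolean stillHeld = lock.isLocked();
        lock.unlock();               // state: 1 -> 0, fully released
        return stillHeld && !lock.isLocked();
    }

    public static void main(String[] args) {
        System.out.println(heldAfterSingleUnlock()); // true
    }
}
```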
3.unparkSuccessor()
```java
private void unparkSuccessor(Node node) {
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        // Search backwards from the tail for the first non-cancelled node
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}
```
As its name says, this method wakes up the successor thread. It first reads and clears the head node's waitStatus, then takes the next node and checks it. If the next node is invalid, it scans backwards from the tail of the waiting queue for the first valid node, and then wakes that node's thread via LockSupport.unpark(s.thread); the woken thread reenters the lock-acquisition loop of step 5.
3, Synchronized vs. Lock

1. Similarities:
- Coordinate the simultaneous access of multithreads to shared objects and variables.
- Reentrant, the same thread can obtain the same lock multiple times.
- Both ensure visibility and mutual exclusion.
- Both can cause deadlock.
2. Differences
- ReentrantLock acquires and releases the lock explicitly, while synchronized acquires and releases it implicitly.
- ReentrantLock can respond to interrupts and supports polling and timed lock attempts, while synchronized cannot respond to interrupts.
- ReentrantLock is acquired and released manually, which is more flexible; synchronized releases the lock automatically, which is more convenient and safer.
- ReentrantLock is API level and synchronized is JVM level.
- ReentrantLock can implement fair lock and unfair lock, while synchronized can only implement unfair lock.
- ReentrantLock can get the lock status, so you can judge whether the lock is successfully obtained. synchronized cannot get the lock status.
- ReentrantLock can bind multiple conditions through Condition, which can easily solve the problems of producers and consumers.
- With ReentrantLock, if an exception occurs and the lock is not released explicitly (typically in a finally block), it may cause deadlock; synchronized releases the lock automatically when an exception occurs.
- ReentrantLock is implemented by AQS, while synchronized is implemented by two bytecode instructions, monitorenter and monitorexit.
- ReentrantLock can achieve a time limited wait to acquire the lock through the tryLock method.
- To be added
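The timed acquisition mentioned in the tryLock point above can be sketched as follows (the class name, the 100 ms timeout, and the helper method are illustrative assumptions):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // While one thread holds the lock, a timed tryLock in another thread
    // waits up to 100 ms and then gives up instead of blocking forever
    static boolean timedAttemptTimesOutWhileHeld() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        AtomicBoolean acquired = new AtomicBoolean(true);
        lock.lock(); // main thread holds the lock
        Thread other = new Thread(() -> {
            try {
                acquired.set(lock.tryLock(100, TimeUnit.MILLISECONDS));
            } catch (InterruptedException ignored) {
            }
        });
        other.start();
        other.join();
        lock.unlock();
        return !acquired.get(); // the timed attempt timed out
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(timedAttemptTimesOutWhileHeld()); // true
    }
}
```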