Alibaba talks to you about some JVM-level locks in Java


Brief introduction

In the computer industry there is a law called "Moore's law". Under this law, computer performance has advanced rapidly while prices keep falling. CPUs have gone from single core to multi-core and cache performance has improved greatly; in particular, the arrival of multi-core CPU technology means a computer can handle multiple tasks at the same time. On top of the efficiency gains brought by hardware development, multithreaded programming at the software level has become an inevitable trend. However, multithreaded programming introduces data-safety problems, so the "lock" was invented to solve thread-safety issues. This article summarizes several classic JVM-level locks in Java.

synchronized

The synchronized keyword is the classic lock we use most often. Before JDK 1.6, synchronized was a heavyweight lock, but with the upgrades of the JDK it has been continuously optimized and is now much less heavy; in some scenarios its performance is even better than that of lightweight locks. In a method or code block marked with the synchronized keyword, only one thread is allowed to enter the protected code segment at a time, which prevents multiple threads from modifying the same data at the same time.
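As a quick illustration (a minimal sketch with made-up class and method names), synchronized can be applied to an instance method or to a code block guarding shared state:

public class Counter {
    private int count = 0;

    // Only one thread at a time may execute this method on a given Counter instance.
    public synchronized void increment() {
        count++;
    }

    // Equivalent protection using a synchronized block on "this".
    public void decrement() {
        synchronized (this) {
            count--;
        }
    }

    public synchronized int get() {
        return count;
    }
}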

The synchronized lock has the following characteristics:

Lock upgrade process

Up to and including JDK 1.5, the underlying implementation of synchronized was heavyweight, which is why it used to be called a "heavyweight lock". After JDK 1.5, various optimizations were made to synchronized and it became less heavy; the implementation principle is the lock-upgrade process. Let's talk about the implementation principle of synchronized after 1.5. To explain the synchronized locking principle, we first have to look at the layout of Java objects in memory, which is as follows:

As shown in the figure above, after an object is created, its storage layout in memory in the JVM (HotSpot) can be divided into three parts:

The information stored in the object header area consists of two parts:

1. Runtime data of the object itself (MarkWord)

It stores the hashCode, GC generational age, lock type flag, biased-lock thread ID, CAS lock pointer to the thread's Lock Record, and so on. The synchronized lock mechanism is closely related to this part (the mark word). The lowest three bits of the mark word represent the lock state: one bit is the biased-lock bit and the other two are the normal lock bits.

2. Class Pointer

A pointer to the object's class metadata, through which the JVM determines which class this object is an instance of.

Instance data area

This area stores the real, useful information of the object, such as the contents of all of its fields.

Alignment padding area

The JVM implementation requires that the starting address of an object be an integer multiple of 8 bytes. In other words, a 64-bit OS reads data in chunks that are multiples of 64 bits, i.e. 8 bytes, at a time. So, in order to read objects efficiently, HotSpot performs "alignment": if the actual memory size occupied by an object is not a multiple of 8 bytes, it is padded up to a multiple of 8 bytes. Therefore the size of the alignment padding area is not fixed.
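As a rough illustration of the arithmetic only (this is not HotSpot's actual alignment code), rounding a size up to the next multiple of 8 is usually done with a small bit trick:

public class AlignmentDemo {
    // Round a size up to the next 8-byte boundary, e.g. 12 -> 16, 16 -> 16, 17 -> 24.
    static long alignTo8(long size) {
        return (size + 7) & ~7L;
    }

    public static void main(String[] args) {
        System.out.println(alignTo8(12)); // 16
        System.out.println(alignTo8(16)); // 16
        System.out.println(alignTo8(17)); // 24
    }
}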

When a thread enters synchronized and tries to acquire the lock, the upgrade process of synchronized lock is as follows:

As shown in the figure above, the order of synchronized lock upgrade is: biased lock -> lightweight lock -> heavyweight lock. Each upgrade step is triggered as follows:

Biased lock

In JDK 1.8 the default is a lightweight lock, but if -XX:BiasedLockingStartupDelay=0 is set, a biased lock is applied immediately when an object is synchronized. In the biased-lock state, the mark word records the current thread ID.
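If you want to observe the mark word and the lock bits yourself, the JOL (Java Object Layout) tool can print an object's header. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath and the JVM is started with -XX:BiasedLockingStartupDelay=0:

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Print the header before locking (with the startup delay disabled,
        // the object should already be in the biasable state).
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Print the header while the lock is held; the lock bits change.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}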

Upgrade to lightweight lock

When another thread joins the competition for the biased lock, it first checks whether the thread ID saved in the mark word equals its own thread ID. If not, the biased lock is immediately revoked and upgraded to a lightweight lock. Each thread creates a lock record (LR) in its own thread stack, and then each thread tries, through a CAS (spin) operation, to set the mark word in the lock object's header to a pointer to its own LR. Whichever thread succeeds obtains the lock. The CAS operation executed by synchronized at this point is implemented natively in the C++ code of bytecodeInterpreter.cpp in HotSpot; if you are interested, you can dig further.

Upgrade to heavyweight lock

If lock competition intensifies (for example, the number of spins or the number of spinning threads exceeds a certain threshold; after JDK 1.6 the JVM controls this rule itself), the lock is upgraded to a heavyweight lock. At this point resources are requested from the operating system, the thread is suspended and enters a waiting queue in the kernel, waits to be scheduled by the operating system, and is then mapped back to user mode. In a heavyweight lock, the switches between kernel mode and user mode take a lot of time, which is one of the reasons it is "heavyweight".

Reentrant

synchronized has a built-in, mandatory atomic lock mechanism and it is a reentrant lock. Therefore, while a thread is inside a synchronized method it can call another synchronized method of the same object; in other words, a thread that already holds an object's lock and requests that lock again will always obtain it. In Java, object locks are acquired per thread, not per call. The thread holding the synchronized lock and a counter are recorded in the mark word of the object header. When a thread's request succeeds, the JVM records that thread as the lock holder and sets the counter to 1. If other threads request the lock at this point, they must wait. If the holding thread requests the lock again, it gets it and the counter is incremented. When a thread exits a synchronized method/block, the counter is decremented, and when the counter reaches 0 the lock is released.
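A minimal sketch of this reentrancy (class and method names are made up for illustration):

public class ReentrantDemo {
    public synchronized void outer() {
        // The current thread already holds this object's monitor;
        // calling another synchronized method on the same object succeeds
        // and simply bumps the monitor's reentry counter.
        inner();
    }

    public synchronized void inner() {
        System.out.println("re-entered the same monitor");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}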

Pessimistic lock (exclusive lock)

synchronized is a pessimistic (exclusive) lock. Once the current thread obtains the lock, all other threads that need that lock have to wait until the holding thread releases it, and only then can they compete for the lock again.

ReentrantLock

As the name suggests, ReentrantLock is a reentrant lock; in this respect it is the same as synchronized, but its implementation principle is very different. It is based on the classic AQS (AbstractQueuedSynchronizer), and AQS is implemented on top of volatile and CAS. AQS maintains a volatile variable called state to record the lock's reentry count, and locking and unlocking both revolve around this variable. ReentrantLock also provides some features that synchronized does not have, so it is more flexible to use than synchronized.
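To make the AQS idea concrete, here is a minimal sketch of a non-reentrant mutex built on AbstractQueuedSynchronizer (it mirrors the example in the AQS Javadoc rather than ReentrantLock's actual internals): state == 0 means unlocked, state == 1 means locked, and tryAcquire/tryRelease drive compareAndSetState.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant mutex built on AQS, for illustration only.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state 0 -> 1: whichever thread succeeds owns the lock.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getExclusiveOwnerThread() != Thread.currentThread())
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // volatile write releases the lock
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}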

The AQS model is shown below:

ReentrantLock has the following features:

1. Reentrant

Both ReentrantLock and the synchronized keyword are reentrant locks, but their implementations differ slightly. ReentrantLock uses the state field of AQS to determine whether the resource is locked: if the same thread locks again, state is incremented by 1; if the same thread unlocks, state is decremented by 1 (only the current exclusive owner may unlock, otherwise an exception is thrown); the lock is fully released when state reaches 0.

2. Locking and unlocking must be done manually

The synchronized keyword locks and unlocks automatically, while ReentrantLock requires explicit calls to lock() and unlock(), usually combined with a try/finally block, to lock and unlock manually.
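A typical usage pattern looks like this (a minimal sketch with a hypothetical shared counter); unlock() goes in finally so the lock is released even if the critical section throws:

import java.util.concurrent.locks.ReentrantLock;

public class ManualLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();           // lock manually
        try {
            count++;           // critical section
        } finally {
            lock.unlock();     // always unlock in finally
        }
    }
}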

3. Supports a lock-acquisition timeout

The synchronized keyword cannot set a timeout for acquiring a lock; if a deadlock occurs inside a thread that holds the lock, the other threads block forever. ReentrantLock provides the tryLock method, which lets a thread set a timeout for acquiring the lock; if the timeout expires, the thread skips the critical section and does nothing, avoiding deadlock.
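A minimal sketch of the timed acquisition (everything except tryLock itself is made up for illustration):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Wait at most 2 seconds for the lock; give up instead of blocking forever.
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not get the lock within 2s, skipping");
        }
    }
}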

4. Supports fair / unfair locks

The synchronized keyword is an unfair lock: whichever thread grabs the lock first executes first. ReentrantLock's constructor lets you pass true/false to choose between a fair and an unfair lock. If it is set to true, threads acquire the lock on a first-come, first-served basis: each time, a thread Node is constructed and queued behind the tail of the doubly linked list, waiting for the previous Node to release the lock.
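For example (a minimal sketch), passing true to the constructor creates a fair lock in which waiting threads acquire it in FIFO order:

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true  -> fair lock: threads acquire it in arrival (FIFO) order
    // false -> unfair lock (the default), usually higher throughput
    private final ReentrantLock fairLock = new ReentrantLock(true);

    public void work() {
        fairLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the fair lock");
        } finally {
            fairLock.unlock();
        }
    }
}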

5. Interruptible lock

The lockInterruptibly() method of ReentrantLock allows a thread to respond to interruption while it is blocked waiting for the lock. For example, if thread t1 is waiting for a reentrant lock through lockInterruptibly() while executing a long task, another thread can call t1's interrupt() method to break t1 out of the wait immediately. A thread waiting for a lock through ReentrantLock.lock() or synchronized does not respond to other threads' interrupt() calls until the lock is released by its holder.
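A minimal sketch of an interruptible wait (thread names and the task are made up): t2 blocks in lockInterruptibly() because t1 holds the lock, and the main thread interrupts t2, which exits with an InterruptedException instead of waiting forever.

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        Thread t1 = new Thread(() -> {
            lock.lock();
            try {
                Thread.sleep(10_000);       // hold the lock for a long time
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            try {
                lock.lockInterruptibly();   // waiting here can be interrupted
                try {
                    System.out.println("t2 got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("t2 was interrupted while waiting for the lock");
            }
        });

        t1.start();
        Thread.sleep(100);                  // let t1 grab the lock first
        t2.start();
        Thread.sleep(100);
        t2.interrupt();                     // break t2 out of its wait
    }
}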

ReentrantReadWriteLock

ReentrantReadWriteLock is actually two locks: one is the write lock (WriteLock) and the other is the read lock (ReadLock). The rules of a read-write lock are: read-read is not mutually exclusive, read-write is mutually exclusive, and write-write is mutually exclusive. In many real scenarios, read operations are far more frequent than write operations. If an ordinary exclusive lock is used directly for concurrency control, reads exclude reads, reads exclude writes, and writes exclude writes, which is inefficient. The read-write lock exists to optimize this scenario. In general, the inefficiency of an exclusive lock comes from fierce contention for the critical section under high concurrency, which causes thread context switches. Therefore, when concurrency is not very high, a read-write lock may actually be less efficient than an exclusive lock because of the extra read/write state it has to maintain, so you should choose according to the actual situation.
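A minimal usage sketch (a hypothetical in-memory cache): many readers may hold the read lock at the same time, while a writer takes the write lock exclusively.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {
    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();       // many readers may enter together
        try {
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();      // writers are exclusive
        try {
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}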

The principle of ReentrantReadWriteLock is also based on AQS. The difference from ReentrantLock is that ReentrantReadWriteLock has both a shared lock and an exclusive lock. Locking and unlocking in the read-write lock are also based on Sync (which inherits from AQS) and are mainly implemented through state in AQS and waitStatus in the queue nodes. The main difference between implementing a read-write lock and an ordinary mutual-exclusion lock is that the read-lock state and write-lock state must be recorded separately, and the two kinds of lock operations must be handled differently in the waiting queue. ReentrantReadWriteLock splits the int-typed state in AQS into the high 16 bits and the low 16 bits to record the read-lock state and the write-lock state respectively, as shown in the following figure:

WriteLock is a pessimistic (exclusive) lock

By computing state & ((1 << 16) - 1), all of the high 16 bits of state are erased, so the low 16 bits of state record the reentry count of the write lock.
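To make the bit split concrete, here is a small standalone sketch of the same arithmetic (the constant and method names are copied from the JDK's Sync class; the demo class itself is made up): the low 16 bits hold the write-lock (exclusive) count and the high 16 bits hold the read-lock (shared) count.

public class RwStateDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);       // adds 1 to the read count
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;   // 65535
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }  // read (shared) holds
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }  // write (exclusive) holds

    public static void main(String[] args) {
        int state = 3 * SHARED_UNIT + 2;  // pretend: 3 read holds, 2 write re-entries
        System.out.println("read holds  = " + sharedCount(state));    // 3
        System.out.println("write holds = " + exclusiveCount(state)); // 2
    }
}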

Get write lock source code:

/**
 * Get write lock.
 * Acquires the write lock.
 *
 * If no other thread holds the write lock or the read lock at this moment, the current thread
 * performs a CAS operation to update state; if the update succeeds, the write-lock hold count
 * is set to 1 and the method returns immediately.
 * <p>Acquires the write lock if neither the read nor write lock
 * are held by another thread
 * and returns immediately, setting the write lock hold count to
 * one.
 *
 * If the current thread already holds the write lock, the hold count is incremented by one
 * and the method returns immediately.
 * <p>If the current thread already holds the write lock then the
 * hold count is incremented by one and the method returns
 * immediately.
 *
 * If the lock is held by another thread, the current thread is taken off CPU scheduling and
 * lies dormant; it acquires the write lock only after the lock is released and it successfully
 * sets the write-lock hold count to 1.
 * <p>If the lock is held by another thread then the current
 * thread becomes disabled for thread scheduling purposes and
 * lies dormant until the write lock has been acquired, at which
 * time the write lock hold count is set to one.
 */
public void lock() {
    sync.acquire(1);
}

/**
 * This method acquires the lock in exclusive mode, ignoring interrupts.
 * If one call to "tryAcquire" updates state successfully, it returns directly, which means
 * the lock was grabbed successfully.
 * Otherwise the thread enters the synchronization queue to wait and keeps executing
 * "tryAcquire" to try to CAS-update state until the lock is successfully grabbed.
 * "tryAcquire" has its own implementation in FairSync (fair lock) and NonfairSync (unfair lock).
 *
 * Acquires in exclusive mode, ignoring interrupts. Implemented
 * by invoking at least once {@link #tryAcquire},
 * returning on success. Otherwise the thread is queued, possibly
 * repeatedly blocking and unblocking, invoking {@link
 * #tryAcquire} until success. This method can be used
 * to implement method {@link Lock#lock}.
 *
 * @param arg the acquire argument. This value is conveyed to
 *            {@link #tryAcquire} but is otherwise uninterpreted and
 *            can represent anything you like.
 */
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

protected final boolean tryAcquire(int acquires) {
    /*
     * Walkthrough:
     * 1. If the read count or the write count is not 0 and the thread holding the lock is
     *    not the current thread, return false.
     * 1. If read count nonzero or write count nonzero
     *    and owner is a different thread, fail.
     * 2. If the hold count is not 0 and the total count would exceed the maximum limit,
     *    also return false.
     * 2. If count would saturate, fail. (This can only
     *    happen if count is already nonzero.)
     * 3. Otherwise, the thread is eligible for the lock if this is a reentrant acquire or
     *    the queue policy allows it to try to grab the lock. If so, update state and set owner.
     * 3. Otherwise, this thread is eligible for lock if
     *    it is either a reentrant acquire or
     *    queue policy allows it. If so, update state
     *    and set owner.
     */
    Thread current = Thread.currentThread();
    // Get the state of the read-write lock
    int c = getState();
    // Get the write-lock reentry count
    int w = exclusiveCount(c);
    // If the read-write lock state is not 0, some thread has already acquired the read lock or the write lock
    if (c != 0) {
        // If the write-lock reentry count is 0, some thread holds the read lock; return false
        // according to the "reads and writes are mutually exclusive" rule.
        // If the write-lock reentry count is not 0 and the thread holding the write lock is not
        // the current thread, return false according to the "write lock is exclusive" rule.
        // (Note: if c != 0 and w == 0 then shared count != 0)
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        // If re-entering the write lock would exceed the maximum count (65535), throw an error
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // The thread is re-entering the write lock: update the write-lock reentry count (+1) and return true
        // Reentrant acquire
        setState(c + acquires);
        return true;
    }
    // If the read-write lock state is 0, neither the read lock nor the write lock has been
    // acquired, and one of the following two branches is taken:
    // return false if the writer should block or the CAS update of the state fails;
    // if no blocking is needed and the CAS succeeds, the current thread acquires the lock,
    // the lock owner is set to the current thread, and true is returned.
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        return false;
    setExclusiveOwnerThread(current);
    return true;
}

Release write lock source code:

/*
 * Note that tryRelease and tryAcquire can be called by
 * Conditions. So it is possible that their arguments contain
 * both read and write holds that are all released during a
 * condition wait and re-established in tryAcquire.
 */
protected final boolean tryRelease(int releases) {
    // Throw an exception if the lock holder is not the current thread
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    // Subtract releases from the write-lock reentry count
    int nextc = getState() - releases;
    // If the write-lock reentry count becomes 0, the write lock is released
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        // If the write lock is released, set the lock owner to null so it can be GC'd
        setExclusiveOwnerThread(null);
    // Update the write-lock reentry count
    setState(nextc);
    return free;
}

ReadLock is a shared lock

By computing state >>> 16 (an unsigned right shift by 16 bits), the low bits are discarded, so the high 16 bits of state record the reentry count of the read lock.

The process of acquiring the read lock is slightly more complicated than acquiring the write lock. First it checks whether the write-lock count is nonzero and the write lock is held by a thread other than the current one; if so, it fails immediately. Otherwise it checks whether the reader should block, whether the read-lock count is below the maximum, and whether the CAS update of state succeeds. If no read lock is held yet, it records the first read thread in firstReader and sets firstReaderHoldCount to 1; if the current thread is already the first read thread, it increments firstReaderHoldCount; otherwise it updates the HoldCounter object associated with the current thread. After the update succeeds, the current thread's reentry count is recorded in its own copy of readHolds (a ThreadLocal). This exists to implement the getReadHoldCount() method added in JDK 1.6, which returns the number of times the current thread has re-entered the shared lock (state only records the total across all threads). Adding this method makes the code more complicated, but the principle is simple: if there is currently only one reading thread, its reentry count is stored directly in the member variable firstReaderHoldCount without using ThreadLocal; once a second reading thread arrives, the ThreadLocal variable readHolds is used, and each thread keeps its own reentry count in its own copy.

Get read lock source code

/**
 * Get read lock.
 * Acquires the read lock.
 *
 * If the write lock is not held by another thread, perform a CAS operation to update the
 * state value and return immediately after acquiring the read lock.
 * <p>Acquires the read lock if the write lock is not held by
 * another thread and returns immediately.
 *
 * If the write lock is held by another thread, the current thread is taken off CPU scheduling
 * and lies dormant until the read lock has been acquired.
 * <p>If the write lock is held by another thread then
 * the current thread becomes disabled for thread scheduling
 * purposes and lies dormant until the read lock has been acquired.
 */
public void lock() {
    sync.acquireShared(1);
}

/**
 * This method acquires the read lock in shared mode, ignoring interrupts.
 * If one call to "tryAcquireShared" updates state successfully, it returns directly, which
 * means the lock was grabbed successfully.
 * Otherwise the thread enters the synchronization queue to wait and keeps executing
 * "tryAcquireShared" to try to CAS-update state until the lock is successfully grabbed.
 * "tryAcquireShared" has its own implementation in FairSync (fair lock) and NonfairSync
 * (unfair lock). (Note how symmetrical the comments are with the write lock.)
 *
 * Acquires in shared mode, ignoring interrupts. Implemented by
 * first invoking at least once {@link #tryAcquireShared},
 * returning on success. Otherwise the thread is queued, possibly
 * repeatedly blocking and unblocking, invoking {@link
 * #tryAcquireShared} until success.
 *
 * @param arg the acquire argument. This value is conveyed to
 *            {@link #tryAcquireShared} but is otherwise uninterpreted
 *            and can represent anything you like.
 */
public final void acquireShared(int arg) {
    if (tryAcquireShared(arg) < 0)
        doAcquireShared(arg);
}

protected final int tryAcquireShared(int unused) {
    /*
     * Walkthrough:
     * 1. If another thread has acquired the write lock, then by the "read-write mutual
     *    exclusion" rule the acquisition fails and -1 is returned.
     * 1. If write lock held by another thread, fail.
     * 2. Otherwise this thread is eligible for the lock; check readerShouldBlock. If it does
     *    not need to block, perform a CAS to update state and the reentry count.
     *    Note that this step does not check for reentrancy (the read lock is a shared lock,
     *    so reentrancy is naturally supported).
     * 2. Otherwise, this thread is eligible for
     *    lock wrt state, so ask if it should block
     *    because of queue policy. If not, try
     *    to grant by CASing state and updating count.
     *    Note that step does not check for reentrant
     *    acquires, which is postponed to full version
     *    to avoid having to check hold count in
     *    the more typical non-reentrant case.
     * 3. If step 2 fails because the CAS update of state fails or the reentry count exceeds
     *    the maximum, enter the fullTryAcquireShared method and loop until the lock is grabbed.
     * 3. If step 2 fails either because thread
     *    apparently not eligible or CAS fails or count
     *    saturated, chain to version with full retry loop.
     */
    // The thread currently trying to acquire the read lock
    Thread current = Thread.currentThread();
    // Get the read-write lock state
    int c = getState();
    // If some thread has acquired the write lock and it is not the current thread, return failure
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        return -1;
    // Get the read-lock reentry count
    int r = sharedCount(c);
    // If the reader should not block, the reentry count is below the maximum, and the CAS that
    // adds 1 to the read-lock count succeeds, record the per-thread reentry count and succeed
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        // If no thread has acquired the read lock yet, set firstReader to the current thread
        // and firstReaderHoldCount to 1
        if (r == 0) {
            firstReader = current;
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            // If firstReader is the current thread, add 1 to its reentry counter firstReaderHoldCount
            firstReaderHoldCount++;
        } else {
            // Otherwise at least two threads share the read lock; get the shared-lock reentry
            // counter (HoldCounter) of the current thread, using the cachedHoldCounter shortcut,
            // and add 1 to this thread's reentry count
            HoldCounter rh = cachedHoldCounter;
            if (rh == null || rh.tid != getThreadId(current))
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    // If any of the conditions above is not satisfied, enter this method and retry in a loop
    return fullTryAcquireShared(current);
}

/**
 * The full version used to handle CAS failures on state and reentrant reads that
 * tryAcquireShared did not deal with (a fallback method).
 * Full version of acquire for reads, that handles CAS misses
 * and reentrant reads not dealt with in tryAcquireShared.
 */
final int fullTryAcquireShared(Thread current) {
    /*
     * This code is partially similar to the code in tryAcquireShared,
     * but it is simpler overall because it does not complicate tryAcquireShared with the
     * interactions between retries and lazily reading hold counts.
     * This code is in part redundant with that in
     * tryAcquireShared but is simpler overall by not
     * complicating tryAcquireShared with interactions between
     * retries and lazily reading hold counts.
     */
    HoldCounter rh = null;
    // Infinite loop
    for (;;) {
        // Get the read-write lock state
        int c = getState();
        // If some thread has acquired the write lock
        if (exclusiveCount(c) != 0) {
            // If the thread holding the write lock is not the current thread, return failure
            if (getExclusiveOwnerThread() != current)
                return -1;
            // else we hold the exclusive lock; blocking here
            // would cause deadlock.
        } else if (readerShouldBlock()) {
            // No thread holds the write lock, but the reader should block
            // Make sure we're not acquiring read lock reentrantly
            if (firstReader == current) {
                // The current thread is the first thread that acquired the read lock
                // assert firstReaderHoldCount > 0;
            } else {
                // The current thread is not the first read thread
                // (i.e. at least one other thread has acquired the read lock)
                if (rh == null) {
                    rh = cachedHoldCounter;
                    if (rh == null || rh.tid != getThreadId(current)) {
                        rh = readHolds.get();
                        if (rh.count == 0)
                            readHolds.remove();
                    }
                }
                if (rh.count == 0)
                    return -1;
            }
        }
        /*
         * Below is the case where no thread holds the write lock and the current thread
         * does not need to block.
         */
        // If the reentry count equals the maximum, throw an error
        if (sharedCount(c) == MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // If the CAS succeeds, add 1 to the read-lock count in state, then add 1 to the
        // reentry count of the current thread holding the shared read lock, and return success
        if (compareAndSetState(c, c + SHARED_UNIT)) {
            if (sharedCount(c) == 0) {
                firstReader = current;
                firstReaderHoldCount = 1;
            } else if (firstReader == current) {
                firstReaderHoldCount++;
            } else {
                if (rh == null)
                    rh = cachedHoldCounter;
                if (rh == null || rh.tid != getThreadId(current))
                    rh = readHolds.get();
                else if (rh.count == 0)
                    readHolds.set(rh);
                rh.count++;
                cachedHoldCounter = rh; // cache for release
            }
            return 1;
        }
    }
}

Release read lock source code:

/**
 * Releases in shared mode. Implemented by unblocking one or more
 * threads if {@link #tryReleaseShared} returns true.
 *
 * @param arg the release argument. This value is conveyed to
 *            {@link #tryReleaseShared} but is otherwise uninterpreted
 *            and can represent anything you like.
 * @return the value returned from {@link #tryReleaseShared}
 */
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) {    // Try to release one shared-lock count
        doReleaseShared();          // Really release the lock (wake up waiting successors)
        return true;
    }
    return false;
}

/**
 * This method releases the read lock held by the current thread.
 * First it checks whether the current thread is the first read thread (firstReader).
 * If so, and firstReaderHoldCount is 1, firstReader is set to null; otherwise
 * firstReaderHoldCount is decremented by 1.
 * If the current thread is not the first read thread, the cached counter (the counter of
 * the last thread that acquired the read lock) is checked first. If that counter is null
 * or its tid does not equal the current thread's tid, the current thread's own counter is
 * fetched. If that counter's count is less than or equal to 1, the counter is removed from
 * the ThreadLocal, and if the count is less than or equal to 0 an exception is thrown.
 * The count is then decremented.
 * In either case, the method then enters an infinite loop that guarantees the state is
 * updated successfully.
 */
protected final boolean tryReleaseShared(int unused) {
    // Get the current thread
    Thread current = Thread.currentThread();
    if (firstReader == current) {
        // The current thread is the first read thread
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)  // The thread holds the read lock exactly once
            firstReader = null;
        else                            // Otherwise just decrement its hold count
            firstReaderHoldCount--;
    } else {
        // The current thread is not the first read thread
        // Get the cached counter
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current))
            // The cached counter is null or belongs to another thread:
            // get the counter of the current thread
            rh = readHolds.get();
        // Get the count
        int count = rh.count;
        if (count <= 1) {
            // Count is less than or equal to 1: remove the counter
            readHolds.remove();
            if (count <= 0)  // Count less than or equal to 0: unmatched unlock, throw an exception
                throw unmatchedUnlockException();
        }
        // Decrement the count
        --rh.count;
    }
    for (;;) {
        // Infinite loop
        // Get the state
        int c = getState();
        // Subtract one shared (read) unit from the state
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on readers,
            // but it may allow waiting writers to proceed if
            // both read and write locks are now free.
            return nextc == 0;
    }
}

/**
 * Really releases the lock.
 * Release action for shared mode -- signals successor and ensures
 * propagation. (Note: For exclusive mode, release just amounts
 * to calling unparkSuccessor of head if it needs signal.)
 */
private void doReleaseShared() {
    /*
     * Ensure that a release propagates, even if there are other
     * in-progress acquires/releases. This proceeds in the usual
     * way of trying to unparkSuccessor of head if it needs
     * signal. But if it does not, status is set to PROPAGATE to
     * ensure that upon release, propagation continues.
     * Additionally, we must loop in case a new node is added
     * while we are doing this. Also, unlike other uses of
     * unparkSuccessor, we need to know if CAS to reset status
     * fails, if so rechecking.
     */
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;            // loop to recheck cases
                unparkSuccessor(h);
            }
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;                // loop on failed CAS
        }
        if (h == head)                   // loop if head changed
            break;
    }
}

Through the analysis, we can see that:

When a thread holds the read lock, it cannot acquire the write lock (because when acquiring the write lock, if the read lock is found to be held, the acquisition fails immediately, regardless of whether the read lock is held by the current thread or not).

When a thread holds the write lock, it can still acquire the read lock (when acquiring the read lock, if the write lock is found to be held, the acquisition only fails if the write lock is held by a thread other than the current one).
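The second point is the basis of the classic "lock downgrade" pattern: a thread that holds the write lock may acquire the read lock and then release the write lock, so it keeps seeing the data it just wrote without letting other writers in. A minimal sketch (the data field and method are made up for illustration):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int data;

    public int updateAndRead(int newValue) {
        rwLock.writeLock().lock();          // 1. take the write lock
        try {
            data = newValue;                // 2. modify the data
            rwLock.readLock().lock();       // 3. take the read lock while still holding the write lock
        } finally {
            rwLock.writeLock().unlock();    // 4. release the write lock -> downgraded to a read lock
        }
        try {
            return data;                    // 5. read under the read lock
        } finally {
            rwLock.readLock().unlock();     // 6. finally release the read lock
        }
    }
}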

LongAdder

Under high concurrency, directly using i++ on an int/Integer variable cannot guarantee the atomicity of the operation, which leads to thread-safety problems. For this we usually use AtomicInteger from java.util.concurrent (juc), a class that provides atomic operations and achieves thread safety internally through CAS. However, when a large number of threads access it at the same time, many of them fail their CAS operations and spin uselessly, which wastes CPU resources and lowers throughput. Doug Lea was not satisfied with this, so in JDK 1.8 he optimized the CAS approach and provided LongAdder, which is based on the idea of CAS plus segmented (striped) locking.

When a thread reads and writes a variable of type LongAdder, the process is as follows:

LongAdder is also based on the CAS operations provided by Unsafe plus volatile. Its parent class Striped64 maintains a base variable and a cell array. When multiple threads operate on the value, a CAS on the base variable is attempted first, and the cell array is used once more contention is detected. For example, when updating base it finds there is contention (that is, the casBase call that updates the base value fails), it automatically switches to the cell array: each thread maps to a cell and performs its CAS on that cell, so the update pressure on a single value is spread across multiple values. This reduces the "heat" of a single hot spot and also reduces the idle spinning of a large number of threads, improving concurrent efficiency and dispersing the contention. This kind of segmented lock needs to maintain extra memory for the cells, but in high-concurrency scenarios that cost is almost negligible. Segmented locking is an excellent optimization idea; the ConcurrentHashMap provided in juc also relies on the idea of segmented locking to ensure the thread safety of read and write operations.
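A minimal usage sketch comparing the two counters (the thread count and loop size are arbitrary): both are thread-safe, but under heavy contention LongAdder usually scales better because increments are striped across cells and sum() combines base plus all cells.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong atomic = new AtomicLong();
        LongAdder adder = new LongAdder();

        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                atomic.incrementAndGet();  // single hot CAS target
                adder.increment();         // CAS spread over base + cells
            }
        };

        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) (threads[i] = new Thread(task)).start();
        for (Thread t : threads) t.join();

        System.out.println("AtomicLong = " + atomic.get());  // 800000
        System.out.println("LongAdder  = " + adder.sum());   // 800000
    }
}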

Finally

If you liked this article, remember to follow me. Thank you for your support!
