Synchronized underlying principle

Java objects and the object header

Locks in Java

Monitor (heavyweight lock)

Monitor is usually translated as "monitor" or "monitor construct" (管程)

synchronized is implemented through a Monitor lock held inside the object. The Monitor lock, in turn, relies on the underlying operating system's Mutex Lock. Switching between threads requires the operating system to move from user mode to kernel mode, which is expensive and relatively slow; this is why synchronized was historically inefficient. A lock that depends on the operating system's Mutex Lock in this way is called a "heavyweight lock".

Each Java object can be associated with a Monitor object. If synchronized is used to lock the object (heavyweight locking), the object header is changed:
a pointer to the Monitor object is set in the Mark Word


When thread 2 executes the critical-section code and acquires the object lock, the locked object obj is associated with an operating-system monitor: the Mark Word in obj's object header is made to point at the Monitor object.

Thread 2 becomes the Owner of the Monitor object; the Mark Word of the object header is set to the Monitor object's address, and the lock flag bits are changed to 10, marking a heavyweight lock.

If other threads arrive now, they first check whether obj is already associated with a Monitor object (i.e. whether it has an Owner). If it is, a thread spins for a while trying to acquire the lock, waiting for it to be released; once the spin limit is exceeded, the thread is placed in the blocking queue (EntryList) and enters the blocked state.

After thread 2 finishes the critical-section code it releases the lock, vacates the Owner, and wakes up the threads in the EntryList, which then compete for the lock again.

synchronized only behaves as described when threads enter the monitor of the same object; objects that are not synchronized on are never associated with a monitor.

Heavyweight lock unlock

Find the Monitor object via the Monitor address stored in the Mark Word, set the Owner to null, and wake up the BLOCKED threads in the EntryList.
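To make the Owner/EntryList behavior concrete, here is a minimal sketch (the class, thread counts, and iteration counts are ours, for illustration only): two threads repeatedly contend for the same object monitor, and the monitor serializes every increment.

```java
public class MonitorContention {
    static final Object lock = new Object();
    private static int counter = 0;

    static void increment(int times) {
        for (int i = 0; i < times; i++) {
            synchronized (lock) {   // monitorenter: become the Owner or wait in the EntryList
                counter++;
            }                       // monitorexit: clear the Owner, wake an EntryList thread
        }
    }

    static int getCounter() { return counter; }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> increment(10_000));
        Thread t2 = new Thread(() -> increment(10_000));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 20000: the monitor serialized every increment
    }
}
```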

Synchronized principle

At the bytecode level, consider the following source:

static final Object lock = new Object();
static int counter = 0;

public static void main(String[] args) {
    synchronized (lock) {
        counter++;
    }
}

Corresponding bytecode

public static void main(java.lang.String[]);
  descriptor: ([Ljava/lang/String;)V
  stack=2, locals=3, args_size=1
     0: getstatic     #2   // <- lock reference (the synchronized object)
     3: dup
     4: astore_1           // lock reference -> slot 1
     5: monitorenter       // set the lock object's Mark Word to the Monitor pointer
     6: getstatic     #3   // <- counter
     9: iconst_1           // push the constant 1
    10: iadd               // +1
    11: putstatic     #3   // -> counter
    14: aload_1            // <- lock reference
    15: monitorexit        // reset the lock object's Mark Word, wake up the EntryList
    16: goto          24
    19: astore_2           // e -> slot 2
    20: aload_1            // <- lock reference
    21: monitorexit        // reset the lock object's Mark Word, wake up the EntryList
    22: aload_2            // <- slot 2 (e)
    23: athrow             // throw e
    24: return
  Exception table:
     from    to  target type
         6    16     19   any
        19    22     19   any
  LineNumberTable:
    line 8: 0
    line 9: 6
    line 10: 14
    line 11: 24
  LocalVariableTable:
    Start  Length  Slot  Name  Signature
        0      25     0  args  [Ljava/lang/String;
  StackMapTable: number_of_entries = 2
    frame_type = 255 /* full_frame */
      offset_delta = 19
      locals = [ class "[Ljava/lang/String;", class java/lang/Object ]
      stack = [ class java/lang/Throwable ]
    frame_type = 250 /* chop */
      offset_delta = 4
  1. monitorenter: inserted at the start of the synchronized block. When execution reaches it, the thread tries to acquire ownership of the object's Monitor, i.e. to acquire the object's lock.
  2. monitorexit: inserted at the end of the block and at the exception handler. The JVM guarantees that every monitorenter has a matching monitorexit.

At the Monitor level

synchronized: when multiple threads contend for an object's lock at the same time, the object's monitor stores the requesting threads in different containers

ContentionList: all threads requesting the lock are placed in this contention queue first
EntryList: threads in the ContentionList that qualify as candidates are moved to the EntryList
WaitSet: threads blocked after calling wait() are placed in the WaitSet
OnDeck: at most one thread competes for the lock at any moment; that thread is called OnDeck
Owner: the thread that currently holds the lock
!Owner: the thread that has just released the lock

  • Each time, the JVM takes one element from the tail of the queue as the lock-contention candidate (OnDeck). Under concurrency, however, the ContentionList is accessed by a large number of threads; to reduce contention on the tail element, the JVM moves some threads into the EntryList as candidate contenders

  • When the Owner thread unlocks, it migrates some threads from the ContentionList to the EntryList and designates one thread in the EntryList as the OnDeck thread

  • The Owner thread does not hand the lock directly to the OnDeck thread; it only grants OnDeck the right to compete, and OnDeck must re-compete for the lock. This sacrifices some fairness but greatly improves throughput; in the JVM this behavior is called "competitive switching"

  • Once the OnDeck thread obtains the lock it becomes the Owner thread, while the threads that failed to get the lock stay in the EntryList. If the Owner thread is blocked by the Object#wait() method, it is moved to the WaitSet queue until it is woken by Object#notify()/Object#notifyAll() at some later time, at which point it re-enters the EntryList

  • Threads in the ContentionList, EntryList, and WaitSet are all in the blocked state; the blocking is carried out by the operating system

  • synchronized is an unfair lock. Before a thread enters the ContentionList, it first tries to spin for the lock, and only joins the ContentionList if that fails, which is clearly unfair to the threads already queued. It is also unfair that a spinning thread may directly seize the lock resource from the OnDeck thread

  • Each object has a monitor object, and locking means competing for that monitor. Block-level locking is implemented by inserting the monitorenter and monitorexit instructions around the block; method-level locking is indicated by a flag bit on the method
    synchronized is a heavyweight operation that must call into the operating system, so its performance used to be poor: the time spent locking could even exceed the time spent on the useful work itself

  • Fortunately, since JDK 1.6 synchronized has received many optimizations: adaptive spinning, lock elimination, lock coarsening, lightweight locks, biased locks, and so on, which substantially improved its efficiency. The implementation of the synchronized keyword was further optimized in JDK 1.7/1.8: lock state is marked in the object header instead of always locking through the operating system

  • A lock can be upgraded from biased lock -> lightweight lock -> heavyweight lock. This upgrade process is called lock inflation

  • JDK 1.6 enables biased locking and lightweight locking by default. Biased locking can be disabled with -XX:-UseBiasedLocking

  • Threads in the ContentionList, EntryList, and WaitSet are blocked, and the blocking is performed by the operating system (on Linux, via the pthread_mutex_lock function). A blocked thread enters the kernel (Linux) scheduling state, causing the system to switch back and forth between user mode and kernel mode, which seriously hurts lock performance

    • The way to alleviate this is spinning. The principle: under contention, if the Owner thread will release the lock very soon, the contending threads can wait a little (spin); once the Owner releases the lock, a spinning thread may acquire it immediately, avoiding a system-level block. But the Owner may hold the lock longer than the threshold, and a contending thread may still fail to obtain the lock after spinning for a while
    • At that point, the contending thread stops spinning and blocks (backs off). The basic idea is: spin first, block if that fails, and minimize the chance of blocking
    • For code blocks with very short execution times this is a very important performance improvement. This kind of spin lock has a more precise name: exponential back-off spin lock, i.e. a composite lock. Obviously, spinning only makes sense on multiprocessors
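The spin-then-block idea described above can be sketched as follows. This is a hand-written illustration, not the JVM's actual implementation; the class name, spin budget, and CAS-on-a-flag design are our assumptions.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustration only: try to grab a flag by CAS for a bounded number of spins;
// only if every spin fails would the caller fall back to blocking the thread
// (e.g. via LockSupport.park()).
public class SpinThenBlock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    static final int SPIN_LIMIT = 100; // assumed spin budget

    public boolean trySpinLock() {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (locked.compareAndSet(false, true)) {
                return true;  // acquired the lock while spinning
            }
        }
        return false;         // spins exhausted: the caller would now block
    }

    public void unlock() {
        locked.set(false);
    }
}
```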

Is synchronized an unfair lock?

Yes. Before a thread enters the ContentionList it first tries to spin for the lock and only joins the ContentionList if that fails, which is clearly unfair to the threads already queued. It is also unfair that a spinning thread may directly seize the lock resource from the OnDeck thread.

Synchronized optimization

The CAS atomic operations provided by modern processors were introduced in JDK 5 (synchronized itself was not optimized in JDK 5; the gains showed up in J.U.C, which is why that version's concurrency package performed better). Since JDK 6, the implementation of synchronized has been heavily reworked: it uses the CAS spinning introduced in JDK 5, adds adaptive CAS spinning, and applies optimization strategies such as lock elimination, lock coarsening, biased locking, and lightweight locking. Because these optimizations greatly improved performance, and because synchronized has clear semantics, is simple to use, and needs no manual release, it is recommended wherever it fits; at the same time, there is still room for performance tuning.

There are four main lock states: no-lock, biased lock, lightweight lock, and heavyweight lock. A lock can be upgraded from a biased lock to a lightweight lock, and then to a heavyweight lock. However, lock upgrading is one-way: locks only go from lower to higher states, and there is no lock downgrading.

No lock: the object has just been instantiated and no thread has locked it yet.

Biased lock: active while a single thread uses the lock. It can be disabled with -XX:-UseBiasedLocking.

Lightweight lock: when multiple threads contend, the biased lock is upgraded to a lightweight lock (internally a spin lock). The lightweight lock assumes it will obtain the lock very soon, so it waits for the holder to release the lock by spinning.

Heavyweight lock: when the lightweight lock's optimism proves wrong and it still cannot obtain the lock after spinning a certain number of times, it is upgraded to the final heavyweight lock to avoid wasting resources.

In JDK 1.6, biased locking and lightweight locking are enabled by default. Biased locking can be disabled with -XX:-UseBiasedLocking.

Lightweight Locking

When a thread acquires the lock, the object header is associated with the JVM's monitor object. But how is the lock acquired?

A lightweight lock is attempted first; only after it fails is the lock upgraded to a heavyweight lock, i.e. a monitor lock is requested.

The scenario for lightweight locks is threads executing synchronized blocks alternately, with lock hold times that never overlap (i.e. no actual contention). If the same lock is contended at the same moment, the lightweight lock inevitably inflates into a heavyweight lock.
Lightweight locks are transparent to the user: the syntax is still synchronized.

Locking is first attempted with a lightweight lock; failure triggers lock inflation, upgrading to a heavyweight lock.

When exiting the synchronized block (unlocking), a lock record whose value is null indicates a reentry; in that case the lock record is simply discarded, decrementing the reentry count by one.

Why is the Mark Word of the object header copied into the lock record on the thread's stack when taking a lightweight lock?

The saved value is used as the expected value in the CAS when applying for the object lock. And if the lock inflates to a heavyweight lock while held, the same comparison reveals whether another thread applied for the lock in the meantime; if so, the suspended thread must be woken when the lock is released.
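The role of the saved Mark Word as the CAS "expected" value can be modeled in miniature. This is a hypothetical model, not HotSpot internals: the header is a string, and lock-record pointers are labels.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical model: the saved copy of the header is the expected value both
// when installing the lock-record pointer (locking) and when restoring the
// header (unlocking). A failed CAS on unlock would mean the header changed,
// i.e. the lock inflated while held.
public class MarkWordCasModel {
    static final AtomicReference<String> markWord = new AtomicReference<>("unlocked");

    static boolean tryLightweightLock(String lockRecordPtr) {
        String saved = "unlocked"; // the copy taken into the lock record
        return markWord.compareAndSet(saved, lockRecordPtr);
    }

    static boolean unlock(String lockRecordPtr) {
        return markWord.compareAndSet(lockRecordPtr, "unlocked");
    }
}
```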

From the analysis above, we can conclude:

  • synchronized uses an object lock to guarantee the atomicity of the critical section: the code inside is indivisible from the outside and will not be broken up by thread switches.
  • Lightweight locking fails when another thread has already locked the object (enter lock inflation), or when the current thread acquires the lock again (reentry).
  • Reentrant: after the current thread obtains the lock, it can obtain it again here; the lock-record counter is 1 on the thread's first entry, increments each time the thread re-acquires the lock, and decrements accordingly on each exit.
  • Exclusive: after the current thread obtains the lock, other threads are blocked and placed in the waiting queue.
  • Non-interruptible: once a thread obtains the lock, it cannot be interrupted out of it; only after it releases the lock can other threads get it.
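The reentrancy point above can be demonstrated directly (the class and method names are ours): a thread that already owns a monitor can enter another synchronized block on the same object without blocking itself.

```java
// Sketch of synchronized reentrancy: outer() holds the monitor and then calls
// inner(), which re-acquires the same monitor. If synchronized were not
// reentrant, the call to inner() would deadlock on its own lock.
public class ReentrantDemo {
    private final Object lock = new Object();
    private int depth = 0;

    public int outer() {
        synchronized (lock) {
            depth++;
            inner();          // re-acquires the same monitor: depth goes to 2
            return depth;
        }
    }

    private void inner() {
        synchronized (lock) { // no deadlock: the current thread already owns it
            depth++;
        }
    }
}
```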

Lock inflation

1. Lightweight locking fails: another thread has already taken a lightweight lock on the object, so lock inflation occurs and the lightweight lock is upgraded to a heavyweight lock.

Lock spin optimization

After lightweight locking fails, the virtual machine also applies an optimization called spin locking, to avoid suspending the thread at the operating-system level.

The spin is usually short, perhaps 50 or 100 iterations. If the lock is obtained within a few iterations, the thread enters the critical section smoothly.

If the spin retries fail, i.e. the thread spins a certain number of times without the lock-holding thread releasing the lock, the thread is suspended at the operating-system level. Spinning can genuinely improve efficiency; only when it fails, as a last resort, is the lock upgraded to a heavyweight lock.

Spinning occupies CPU time: on a single-core CPU spinning is a waste, while on multi-core CPUs it can pay off.
Since Java 6 the spin lock is adaptive: for example, if a spin on this object has just succeeded, the next spin is considered likely to succeed too, so the thread spins more times; conversely, it spins less or not at all.

Biased lock

Even with no contention (only its own thread), a lightweight lock must still perform a CAS operation on every reentry.
Java 6 introduced biased locking as a further optimization: CAS is used only the first time, to install the thread ID in the object's Mark Word. If the thread later finds
its own thread ID there, there is no contention and no further CAS is needed; as long as no contention appears, the object belongs to that thread.

Since the lock is often acquired by the same thread many times, biased locking was introduced to make acquiring the lock cheaper.

Lightweight locks improve performance when threads execute synchronized blocks alternately; biased locks further improve performance when only one thread ever executes the block.

Idea: once a thread acquires the monitored object for the first time, the object is "biased" toward that thread. Subsequent acquisitions can then skip the CAS operation: the thread ID is placed in the object header's Mark Word, and if a thread finds its own ID there, it does not need to go through the lock/unlock machinery at all.

Biased locking is a mechanism for the case where a single thread executes the code block. Under multi-threaded contention (i.e. thread A has not finished executing the synchronized block when thread B requests the lock), it is converted into a lightweight or heavyweight lock.

With the synchronized keyword, an object's lock starts as a biased lock; as lock contention escalates, it gradually evolves into a lightweight lock and finally becomes a heavyweight lock.

static final Object obj = new Object();

public static void m1() {
    synchronized (obj) {
        // Synchronization block A
    }
}

public static void m2() {
    synchronized (obj) {
        // Synchronization block B
    }
}

public static void m3() {
    synchronized (obj) {
        // Synchronization block C
    }
}

Revoking the biased lock
  • Add the VM option -XX:-UseBiasedLocking to disable biased locking.

  • Calling hashCode() on the object revokes the bias. When the object is in the biased state, its Mark Word stores the thread ID (together with the epoch, unused, age, and biased_lock bits), leaving no room for the hash code. Hash code and biased lock are therefore mutually exclusive, so calling hashCode() disables the biased lock for that object.

  • A lightweight lock keeps the hashCode in the lock record;
    a heavyweight lock keeps the hashCode in the Monitor.
    (When experimenting with biased locks, remember not to pass -XX:-UseBiasedLocking.)

  • When another thread uses the biased object, the biased lock is upgraded to a lightweight lock.

  • Calling wait/notify revokes the bias and inflates the lock, because wait/notify is supported only by the heavyweight Monitor.

Batch re-bias

If an object is accessed by multiple threads but without contention, an object biased to thread T1 still has the opportunity to be re-biased to T2; re-biasing resets the thread ID in the object header.
When bias revocations for the objects of a class exceed the threshold of 20, the JVM decides it biased these objects to the wrong thread, and re-biases them to the thread that is locking them.

Batch revoke

When bias revocations exceed the threshold of 40, the JVM concludes that the class really should not be biased at all: every object of the class becomes non-biasable, and newly created instances are non-biasable too.

Lock elimination

Lock elimination is another lock optimization in the virtual machine, and a more thorough one. During JIT compilation (compiling a piece of code just before it is executed for the first time, also known as just-in-time compilation), the Java virtual machine scans the running context and removes locks that cannot possibly be contended for shared resources, eliminating unnecessary locks and saving pointless lock-request time. For example, StringBuffer's append is a synchronized method, but if the StringBuffer in an add method is a local variable, it can never be used by another thread, so no contention on its shared state is possible and the JVM automatically eliminates its lock.

public class MyBenchmark {
    static int x = 0;
    public void a() throws Exception { x++; }
    public void b() throws Exception {
        Object o = new Object(); // local, cannot be shared: the JIT eliminates the lock
        synchronized (o) { x++; }
    }
}

Lock coarsening

When using a synchronization lock, keep the scope of the synchronized block as small as possible: synchronize only over the actual shared data. The point is to minimize the work done while holding the lock, so that under lock contention the thread waiting for the lock can obtain it as soon as possible.

In most cases that advice is correct. However, a series of consecutive locking and unlocking operations can itself cause unnecessary performance loss, which is why the concept of lock coarsening was introduced.

Lock coarsening is easy to understand: it merges multiple consecutive locking and unlocking operations into a single lock with a wider scope.

public void vectorTest() {
    Vector<String> vector = new Vector<String>();
    for (int i = 0; i < 10; i++) {
        vector.add(i + "");
    }
}

Each add on the vector takes its lock. When the JVM detects consecutive locking and unlocking operations on the same object (vector), it merges them into one wider-ranging lock/unlock, i.e. the locking is hoisted out of the for loop.
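What the JIT's coarsening effectively produces can be written out by hand (the JIT does this automatically; the class below is only our illustration of the transformed shape):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written equivalent of lock coarsening: instead of acquiring and
// releasing a lock on every add, one lock covers the whole loop.
public class CoarsenDemo {
    static List<String> fillCoarsened(int n) {
        List<String> list = new ArrayList<>();
        synchronized (list) {          // one acquire/release for the entire loop
            for (int i = 0; i < n; i++) {
                list.add(i + "");
            }
        }
        return list;
    }
}
```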

Conversions between heavyweight, lightweight, and biased locks

Specific conversion process


obj.wait() makes a thread that has entered the object's monitor wait in the WaitSet.

It releases the object's lock and enters the WaitSet waiting area, so that other threads can acquire the object's lock; it waits indefinitely until notified.

wait(long n) is a timed wait that ends after n milliseconds, or earlier if the thread is woken by notify. obj.notify() wakes one of the threads waiting on the object.

obj.notifyAll() wakes all of the threads waiting on the object.

These are all means of cooperation between threads. They are methods of Object, and a thread must hold the object's lock before it may call them.

wait/sleep difference

  1. They come from different classes: wait -> Object, sleep -> Thread.

  2. wait releases the lock; sleep does not.

  3. Their usable scope differs: sleep can be called anywhere, while wait can only be called inside a synchronized method or synchronized block.

  4. With a timeout, both put the thread into the TIMED_WAITING state.

Waiting / notification mechanism

Inside a synchronized method or synchronized block, the three Object methods wait(), notify(), and notifyAll() are used for thread communication.

The classic wait/notify paradigm:

The waiting side follows these principles:
1) Acquire the object's lock.
2) If the condition does not hold, call the object's wait() method, and re-check the condition after being notified.
3) Once the condition holds, execute the corresponding logic.

    synchronized (obj) {
        while (condition not met) {
            obj.wait();
        }
        // corresponding logic
    }

The notifying side follows these principles:
1) Acquire the object's lock.
2) Change the condition.
3) Notify all threads waiting on the object.

    synchronized (obj) {
        // change the condition
        obj.notifyAll();
    }
Spurious wakeup

A thread waiting on a condition variable wakes up (because the condition variable was signalled) but finds that the condition it was waiting for is still not satisfied (e.g. the shared data is still absent).
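This is exactly why the paradigm re-checks the condition in a while loop rather than an if. A minimal guard class (names are ours) that is safe against spurious wakeups:

```java
// The condition is re-tested in a while loop, so a wakeup that arrives without
// the condition actually holding simply goes back to waiting.
public class GuardedFlag {
    private boolean ready = false;

    public synchronized void awaitReady() throws InterruptedException {
        while (!ready) {      // 'while', not 'if': re-check after every wakeup
            wait();
        }
    }

    public synchronized void setReady() {
        ready = true;
        notifyAll();
    }
}
```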

Synchronous pattern: guarded suspension (protective pause)

Guarded Suspension is used when one thread waits for the execution result of another thread. Key points:

  • A result must be passed from one thread to another, so both are associated with the same GuardedObject.

  • If results flow from one thread to another continuously, a message queue can be used instead.

  • In the JDK, join and Future are implemented with this pattern.

  • Because one side must wait for the other's result, it is classified as a synchronous pattern.

  • Each waiter corresponds to exactly one result producer: a one-to-one correspondence.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Guarded {

    public static void main(String[] args) {

        GuardedObj guardedSuspension = new GuardedObj();

        new Thread(() -> {
            Object list = null;
            try {
                list = guardedSuspension.getResult(3000); // wait up to 3 s for the result
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("result: " + list);
        }, "waiter").start();

        new Thread(() -> {
            String list = Guarded.downLoad();
            guardedSuspension.complete(list); // publish the result and wake the waiter
        }, "producer").start();
    }

    public static String downLoad() {
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("")).build();

            HttpResponse.BodyHandler<String> stringBodyHandler = HttpResponse.BodyHandlers.ofString();
            HttpResponse<String> response = client.send(request, stringBodyHandler);

            return response.body();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
        return null;
    }
}

class GuardedObj { // decoupling class: decouples the result producer from the waiter

    private Object result;

    public Object getResult(long timeout) throws InterruptedException {
        synchronized (this) {
            if (timeout > 0) {
                // nanosecond timer: high-accuracy measurement of elapsed time
                final long startTime = System.nanoTime();
                long delay = timeout;
                do {
                    this.wait(delay); // wait at most the remaining time (milliseconds)
                } while ((delay = timeout - (System.nanoTime() - startTime) / 1_000_000) > 0
                        && result == null);
            } else if (timeout == 0) {
                while (result == null) {
                    this.wait(); // wait indefinitely
                }
            } else {
                throw new IllegalArgumentException("timeout value is negative");
            }
            return result;
        }
    }

    public void complete(Object result) {
        synchronized (this) {
            this.result = result;
            this.notifyAll(); // wake the waiting thread(s)
        }
    }
}

Asynchronous pattern: producer/consumer

Unlike the GuardedObject in the protective-pause pattern above, the threads that produce results and the threads that consume them do not need to correspond one to one.

A consumption queue can balance the thread resources devoted to production and consumption.

The producer is only responsible for generating result data and does not care how it is processed; the consumer focuses on processing the result data.

The message queue has a capacity limit: when it is full no more data is added, and when it is empty no data is consumed.

The JDK's various blocking queues use this pattern.

import java.util.LinkedList;

public class ProducerAndCustomer {

    public static void main(String[] args) {
        MessageBlockIngQueue queue = new MessageBlockIngQueue(2);

        for (int i = 0; i < 3; i++) {
            int finalI = i;
            new Thread(() -> {
                try {
                    queue.put(new Message(finalI, "produce" + finalI));
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }, "producer" + i).start();
        }

        new Thread(() -> {
            while (true) {
                try {
                    Message message = queue.take();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }, "consumer").start();
    }
}

// The message queue class could also be replaced by a JDK blocking queue such as ArrayBlockingQueue
class MessageBlockIngQueue {

    private final LinkedList<Message> linkedList = new LinkedList<>();

    private final int capacity;

    public MessageBlockIngQueue(int capacity) {
        this.capacity = capacity;
    }

    // save a message
    public void put(Message message) throws InterruptedException {
        synchronized (linkedList) {
            while (linkedList.size() == capacity) {
                System.out.println("Queue full, producer waiting");
                linkedList.wait();
            }
            linkedList.addLast(message);
            System.out.println("Produced message " + message);
            linkedList.notifyAll();
        }
    }

    // take a message
    public Message take() throws InterruptedException {
        synchronized (linkedList) {
            while (linkedList.size() == 0) {
                System.out.println("Queue empty, consumer waiting");
                linkedList.wait();
            }
            Message message = linkedList.removeFirst();
            System.out.println("Took message " + message);
            linkedList.notifyAll();
            return message;
        }
    }
}

record Message(int id, Object value) {
    public String toString() {
        return "Message{" + "id=" + id + ", value=" + value + '}';
    }
}

park & unpark are static methods of the thread-communication utility class LockSupport.

// LockSupport.park()      pauses the current thread
// LockSupport.unpark(t)   resumes thread t
public class TestParkUnpark {
    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            LockSupport.park(); // pause the current thread (t1)
        }, "t1");
        t1.start();
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        LockSupport.unpark(t1); // resume t1
    }
}

Object wait & notify vs. park & unpark
wait, notify, and notifyAll must be used together with the Object Monitor; park and unpark need not be. park & unpark block and wake threads on a per-thread basis, whereas notify wakes a single waiting thread at random and notifyAll wakes all waiting threads, neither of which is as precise.
park & unpark can unpark first (before the park); wait & notify cannot notify first.
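The "unpark first" behavior can be verified directly (the class and method names are ours): an unpark issued before park grants a permit that the subsequent park consumes, so park returns immediately instead of blocking.

```java
import java.util.concurrent.locks.LockSupport;

public class UnparkFirst {
    public static boolean unparkBeforePark() {
        LockSupport.unpark(Thread.currentThread()); // issue the permit first
        LockSupport.park();                         // consumes the permit: no blocking
        return true;                                // reached only because park returned
    }
}
```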


Each thread has its own Parker object, composed of three parts: _counter, _cond, and _mutex.

The process when park is called and then unpark:

The process when unpark is called first and then park:


Posted on Sun, 03 Oct 2021 13:47:58 -0400 by nonexistentera