1: The difference between synchronized and ReentrantLock
1: Different meanings
synchronized is a keyword implemented at the JVM level; under the hood it is compiled to the monitorenter and monitorexit instructions and relies on the object's monitor;
Lock is an interface in the java.util.concurrent.locks package, an API-level lock introduced in JDK 1.5;
2: Different usage
synchronized does not require the user to release the lock manually; when the synchronized block finishes, normally or via an exception, the JVM releases the lock automatically. ReentrantLock must be released explicitly, typically in a finally block; forgetting to unlock can leave other threads blocked forever, effectively a deadlock;
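A minimal sketch of the usual ReentrantLock pattern, assuming a simple shared counter: unlock() sits in a finally block so the lock is released even if the guarded code throws.

import java.util.concurrent.locks.ReentrantLock;

public class UnlockInFinally {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    static void increment() {
        lock.lock();          // acquire explicitly
        try {
            counter++;        // guarded code
        } finally {
            lock.unlock();    // always release, even on exception
        }
    }

    public static void main(String[] args) {
        increment();
        System.out.println(counter);
    }
}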
3: Whether waiting for the lock can be interrupted
A thread blocked waiting for a synchronized lock cannot be interrupted; it waits until it acquires the lock or the holder finishes, normally or by throwing an exception. Waiting on a ReentrantLock can be interrupted: tryLock(long timeout, TimeUnit unit) gives up after the timeout, and lockInterruptibly() responds to interrupt() while blocked.
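A minimal sketch of the two interruptible acquisition styles just mentioned; the class name and timing values are illustrative, not from the original. The main thread holds the lock so the worker first times out in tryLock and is then interrupted while blocked in lockInterruptibly.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleAcquire {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // hold the lock in main so the worker has to wait

        Thread worker = new Thread(() -> {
            try {
                // Option 1: give up after a timeout instead of waiting forever
                if (lock.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        System.out.println("got the lock via tryLock");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("timed out waiting for the lock");
                }
                // Option 2: wait, but respond to interrupt() while blocked
                lock.lockInterruptibly();
                try {
                    System.out.println("got the lock via lockInterruptibly");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting for the lock");
            }
        });

        worker.start();
        Thread.sleep(1500);
        worker.interrupt(); // interrupt the worker while it blocks in lockInterruptibly()
        lock.unlock();      // release the lock held by main
    }
}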
4: Whether the lock is fair
synchronized is a non-fair lock. ReentrantLock is non-fair by default, but you can pass a boolean to the constructor: true creates a fair lock, false a non-fair one;
2: Classification of locks
3: Optimistic lock and pessimistic lock
1: Pessimistic lock
Typical pessimistic locks: synchronized and implementations of the Lock interface.
Pessimistic locking suits workloads with many concurrent writes or long critical sections, where it avoids wasting a large amount of CPU on useless spinning. Typical situations:
① The critical section contains I/O operations
② The critical section code is complex or loops many times
③ Competition for the critical section is fierce
2: Optimistic lock
Typical optimistic locks: the atomic classes, the concurrent containers, etc.
It suits scenarios with few concurrent writes where most operations are reads; because reads do not need to take a lock, read throughput improves greatly.
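A minimal sketch of the optimistic, CAS-based style using AtomicInteger: instead of blocking, the update is retried only if another thread changed the value in between, which is cheap when write contention is low.

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private static final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read the current value, compute the new one,
    // and retry only if the CAS fails because another thread modified it
    static void increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
    }

    public static void main(String[] args) {
        increment();
        System.out.println(value.get()); // 1
    }
}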
4: Reentrant and non-reentrant locks
A reentrant lock means that after a thread has acquired the lock, it can acquire it again without blocking. The underlying implementation maintains a counter: when the thread acquires the lock the counter is incremented by one, and each re-acquisition increments it again. Each release decrements the counter by one; when it reaches 0, the lock is held by no thread and other threads may compete for it.
lock/reentrantlock/GetHoldCount.java
package lock.reentrantlock;

import java.util.concurrent.locks.ReentrantLock;

public class GetHoldCount {
    private static ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        System.out.println(lock.getHoldCount()); // 0: not held yet
        lock.lock();
        System.out.println(lock.getHoldCount()); // 1
        lock.lock();
        System.out.println(lock.getHoldCount()); // 2
        lock.lock();
        System.out.println(lock.getHoldCount()); // 3
        lock.unlock();
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();
        System.out.println(lock.getHoldCount()); // 1
        lock.unlock();
        System.out.println(lock.getHoldCount()); // 0
    }
}
Use of reentrant locks in recursive calls
lock/reentrantlock/RecursionDemo.java
package lock.reentrantlock;

import java.util.concurrent.locks.ReentrantLock;

public class RecursionDemo {
    private static ReentrantLock lock = new ReentrantLock();

    // Each recursive call re-acquires the same lock; getHoldCount() shows the depth
    private static void accessResource() {
        lock.lock();
        try {
            System.out.println("The resource has been processed");
            if (lock.getHoldCount() < 5) {
                System.out.println(lock.getHoldCount());
                accessResource();
                System.out.println(lock.getHoldCount());
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        accessResource();
    }
}
5: Fair lock and unfair lock
1: Overview
Fairness means the lock is granted in the order threads requested it.
Unfairness means that under certain circumstances a thread can jump the queue, so the lock is not granted strictly in request order.
2: Why unfair locks are designed
To improve efficiency by avoiding the idle gap caused by waking up a blocked thread.
For example, queuing overnight to buy train tickets: suppose I am second in line. When the window opens, the person in front of me buys a ticket and it is my turn, but after waiting all night I am dazed and slow to react (a thread waking up from the blocked state). At that moment the first person comes back just to ask about the departure time. Letting him jump the queue is fine, because his quick question does not delay my purchase; I am still slowly waking up.
When creating the ReentrantLock object, passing true to the constructor produces a fair lock.
lock/reentrantlock/FairLock.java
package lock.reentrantlock;

import java.util.Random;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Description: demonstrate both fair and unfair situations
 */
public class FairLock {
    public static void main(String[] args) {
        PrintQueue printQueue = new PrintQueue();
        Thread[] thread = new Thread[10];
        // 10 threads to print
        for (int i = 0; i < 10; i++) {
            thread[i] = new Thread(new Job(printQueue));
        }
        // Start the 10 threads
        for (int i = 0; i < 10; i++) {
            thread[i].start();
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

class Job implements Runnable {
    PrintQueue printQueue;

    public Job(PrintQueue printQueue) {
        this.printQueue = printQueue;
    }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " starts printing");
        printQueue.printJob(new Object());
        System.out.println(Thread.currentThread().getName() + " finished printing");
    }
}

class PrintQueue {
    // Pass true here for a fair lock, false for a non-fair lock
    private Lock queueLock = new ReentrantLock(false);

    // Print each document twice. The lock is released and re-acquired between the two
    // prints, giving other threads a chance to jump the queue before the second print
    public void printJob(Object document) {
        queueLock.lock();
        try {
            int duration = new Random().nextInt(10) + 1;
            System.out.println(Thread.currentThread().getName() + " is printing, needs " + duration + " seconds");
            Thread.sleep(duration * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            queueLock.unlock();
        }
        queueLock.lock();
        try {
            int duration = new Random().nextInt(10) + 1;
            System.out.println(Thread.currentThread().getName() + " is printing, needs " + duration + " seconds");
            Thread.sleep(duration * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            queueLock.unlock();
        }
    }
}
6: Spin lock and blocking lock
Blocking or waking up a Java thread requires the operating system to switch CPU state, which costs processor time.
If the code inside the synchronized block is simple, this state switching may take longer than executing the user code itself.
When two or more threads can run in parallel, a thread that fails to acquire the lock can choose not to give up its CPU time slice: instead of blocking, it spins, repeatedly checking whether the thread holding the lock will release it soon. If the lock is released while the thread is spinning, it can acquire the synchronized resource immediately without ever blocking, avoiding the overhead of a thread switch.
1: Implementation principle of spin lock
The lock method relies on CAS. When the first thread A requests the lock, its CAS succeeds, so it acquires the lock without entering the while loop. If A has not yet released the lock when another thread B requests it, B's CAS fails, so B enters the while loop and keeps retrying the CAS until A calls the unlock method and releases the lock.
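A minimal sketch of the kind of spin lock described here (a reconstruction, since the original code is not shown): lock() spins on a CAS until it installs the current thread as the owner, and unlock() clears the owner.

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // Holds the thread that currently owns the lock, or null if it is free
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Thread A succeeds immediately; a second thread B fails the CAS
        // and keeps looping until A calls unlock()
        while (!owner.compareAndSet(null, current)) {
            // busy-wait (spin)
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null); // only the owner can release
    }
}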
2: Problems of spin lock
If a thread holds the lock for a long time, the threads waiting for it keep spinning in a loop and consume CPU; used improperly, this drives CPU utilization extremely high.
The spin lock implemented above is also not fair: the thread that has waited longest does not necessarily acquire the lock first, and unfair locks can lead to thread starvation.
7: Lock optimization
1: Reduce lock holding time
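A minimal sketch of the idea, with hypothetical method and field names: do the expensive work outside the lock and only guard the part that actually touches shared state.

import java.util.concurrent.locks.ReentrantLock;

public class ShortHoldTime {
    private final ReentrantLock lock = new ReentrantLock();
    private long total = 0;

    // Hypothetical example: expensiveComputation() does not touch shared
    // state, so it runs outside the lock; only the shared update is guarded.
    public void handle(int input) {
        long result = expensiveComputation(input); // no lock held here
        lock.lock();
        try {
            total += result;                       // short critical section
        } finally {
            lock.unlock();
        }
    }

    private long expensiveComputation(int input) {
        return (long) input * input;
    }
}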
2: Reduce lock granularity
Split large locked objects into smaller ones to increase parallelism and reduce lock contention.
ConcurrentHashMap, for example, allows multiple threads to operate on different parts of the map at the same time.
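A minimal sketch of the splitting idea with made-up field names: instead of one lock guarding the whole object, two independent counters each get their own lock, so updates to one never block updates to the other.

public class SplitLocks {
    private final Object readLockObj  = new Object(); // guards readCount only
    private final Object writeLockObj = new Object(); // guards writeCount only
    private long readCount;
    private long writeCount;

    public void recordRead() {
        synchronized (readLockObj) {
            readCount++;
        }
    }

    public void recordWrite() {
        synchronized (writeLockObj) {
            writeCount++;
        }
    }
}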
3: Lock separation
Separate locks according to function.
ReadWriteLock improves performance when reads greatly outnumber writes.
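A minimal sketch of lock separation with ReentrantReadWriteLock: many readers can hold the read lock at the same time, while the write lock is exclusive. The cache class here is illustrative, not from the original.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();      // shared: many readers at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}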
4: Lock elimination
Lock elimination is a lock optimization performed at the compiler (JIT) level.
Sometimes the code we write takes a lock even though no locking is needed at all, typically because the locked object never escapes the current thread; the compiler can detect this and remove the lock.
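A minimal sketch of the situation the JIT can detect: the StringBuffer below is a purely local object that never escapes the method, so the synchronization inside its append() calls can be safely eliminated.

public class LockElision {
    // sb is local and never visible to another thread,
    // so the JIT can remove the locking inside StringBuffer.append()
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("hello, ", "world"));
    }
}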
5: Lock coarsening
Normally, to keep multiple threads running concurrently and effectively, each thread is expected to hold a lock for as short a time as possible. However, if a program repeatedly requests, synchronizes on, and releases the same lock in quick succession, those operations themselves consume system resources, because acquiring and releasing a lock has its own cost even when each individual synchronized section is very short. Such high-frequency lock requests hurt rather than help performance. Lock coarsening is the other side of the trade-off: in these cases we merge many lock requests into a single, larger one to reduce the performance loss caused by a burst of acquisitions and releases in a short time.
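A minimal sketch of the before/after shape, with illustrative field and method names: locking inside the loop acquires and releases the same lock on every iteration, while the coarsened version takes the lock once around the whole loop.

public class LockCoarsening {
    private final Object lock = new Object();
    private long sum = 0;

    // Before: the same lock is acquired and released on every iteration
    public void addAllFineGrained(int[] values) {
        for (int v : values) {
            synchronized (lock) {
                sum += v;
            }
        }
    }

    // After coarsening: one acquire/release around the whole loop
    public void addAllCoarsened(int[] values) {
        synchronized (lock) {
            for (int v : values) {
                sum += v;
            }
        }
    }
}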