Locks in Java

1. Optimistic lock and pessimistic lock

Optimistic and pessimistic locks are usually the first two kinds of locks a developer encounters. Their main application scenario is updating data, which is also one of the most common reasons to use a lock at all. A typical update flow looks like this:

  1. Retrieve the data to be updated and display it to the operator;
  2. The operator changes the values;
  3. The operator clicks Save, and the system updates the data.

This process looks very simple, but if we think about it with multiple threads in mind, a hidden problem appears. Consider the following sequence:

  1. A retrieves the data;
  2. B retrieves the same data;
  3. B modifies the data and saves it;
  4. Will A's save still succeed?

1: Optimistic lock

Whether A's save succeeds of course depends on how the program is written. Setting the code aside and thinking in terms of common sense: when A saves the data, the system should tell A that the data being modified has already been changed by someone else, and ask A to re-query and confirm. How do we implement that in a program?

  1. When retrieving the data, also query its version number or last update time;
  2. After the operator changes the data, click Save to perform the update in the database;
  3. When performing the update, compare the version number or last update time read in step 1 with the current record in the database;
  4. If they match, perform the update;
  5. If they do not match, show the prompt described above.

update xx set number = 10, revision = #{revision} + 1 where id = #{id} and revision = #{revision}
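The version check above can be sketched in Java as an in-memory simulation. Note the `Row` class and `update` method are hypothetical stand-ins for the SQL statement; a real system would issue the UPDATE and check the affected-row count:

```java
// Minimal in-memory sketch of the version-number (revision) strategy above.
public class VersionedUpdate {

    static class Row {
        int number;
        int revision;
    }

    // Succeeds only if the caller's revision still matches the stored one,
    // mirroring "update ... where id = ? and revision = ?".
    static synchronized boolean update(Row row, int newNumber, int expectedRevision) {
        if (row.revision != expectedRevision) {
            return false; // someone else saved first: ask the user to re-query
        }
        row.number = newNumber;
        row.revision++;
        return true;
    }

    public static void main(String[] args) {
        Row row = new Row();
        int revisionSeenByA = row.revision; // A reads the record (revision 0)
        int revisionSeenByB = row.revision; // B reads the same record
        System.out.println(update(row, 10, revisionSeenByB)); // B saves first: true
        System.out.println(update(row, 20, revisionSeenByA)); // A saves: false, stale revision
    }
}
```

B's save bumps the revision to 1, so A's save with the stale revision 0 is rejected, which is exactly the prompt scenario described above.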

The process above is an implementation of optimistic locking. In Java there is no specific method or keyword for an optimistic lock; it is just a process and a strategy. With this example in mind, let's look at optimistic locks in Java.

An optimistic lock assumes that the data a thread reads will not be changed by other threads, as in the example above, and only checks whether the data has been modified at update time. This is the Compare And Swap (CAS) mechanism. Do not confuse it with the similarly named CAP theorem (Consistency, Availability, Partition tolerance), which is unrelated. Once CAS detects a conflict, i.e. the version number or last update time does not match, it retries until there is no conflict.
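The retry-until-no-conflict behavior is easy to see with `AtomicInteger.compareAndSet`. The loop below is a minimal sketch of what `incrementAndGet()` does internally, not the actual JDK source:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A hand-written CAS retry loop over an AtomicInteger.
public class CasRetry {

    static int incrementWithCas(AtomicInteger value) {
        while (true) {
            int current = value.get();   // 1. read the current value
            int next = current + 1;      // 2. compute the new value
            if (value.compareAndSet(current, next)) {
                return next;             // 3. swap succeeded: no conflict
            }
            // Another thread changed the value between steps 1 and 3: retry.
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(incrementWithCas(counter)); // prints 1
        System.out.println(incrementWithCas(counter)); // prints 2
    }
}
```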

In short, the optimistic-lock mechanism is: read without locking, then compare at write time and retry on conflict.

Now let's look at the most common `i++` in Java. Think about it: what is the execution order of `i++`? Is it thread safe? Is there a problem when multiple threads execute `i++` concurrently? Let's look at a program:

package com.bfxy.esjob;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * @Author: qiuj
 * @Description: Optimistic lock
 * @Date: 2020-06-27 13:43
 */
public class OptimisticLocking {

    private int i = 0;

    public static void main(String[] args) throws InterruptedException {
        new OptimisticLocking().notOptimisticLocking();
    }

    public void notOptimisticLocking() throws InterruptedException {
        OptimisticLocking optimisticLocking = new OptimisticLocking();
        ExecutorService executorService = Executors.newFixedThreadPool(50);
        // The latch makes the main thread wait until all 5000 tasks have finished
        CountDownLatch countDownLatch = new CountDownLatch(5000);
        for (int i = 0; i < 5000; i++) {
            executorService.execute(() -> {
                optimisticLocking.i++;
                // One task done: decrement the counter
                countDownLatch.countDown();
            });
        }
        // Stop accepting new tasks; already submitted tasks still run to completion
        executorService.shutdown();
        // Block until all 5000 tasks have counted down
        countDownLatch.await();
        System.out.println("After execution, i=" + optimisticLocking.i);
        /*
         * i++ is not atomic and therefore not thread safe:
         * 1. read the current value, e.g. 2000;
         * 2. write back 2001 - but other threads may be performing the same two
         *    steps concurrently, so updates can overwrite each other and get lost.
         */
    }
}

In the program above we use a pool of 50 threads to execute `i++` concurrently, 5000 times in total. Intuitively the result should be 5000. Let's run the program and see:

After execution, i=4993

After execution, i=4996

After execution, i=4988

This is the result of three runs. The result differs each time, and it is never 5000. Why? Because `i++` is not an atomic operation and is not safe under multithreading. Let's break `i++` into its detailed steps:

  1. Fetch the current value of i from memory;
  2. Add 1 to that value;
  3. Write the result back to memory.

This process is the same as the database update flow explained above. In a multithreaded scenario, imagine threads A and B reading the value of i from memory at the same time. If i is 1000, both threads add 1 and both write the result back; memory now holds 1001, while we expected 1002. That is exactly what caused the wrong results above. How do we fix it? Since Java 1.5, the JDK provides a large number of atomic classes, all based on the CAS mechanism, i.e. on optimistic locking. Let's modify the program slightly:

package com.bfxy.esjob;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @Author: qiuj
 * @Description:    Optimistic lock
 * @Date: 2020-06-27 13:43
 */
public class OptimisticLocking {

    private int i = 0;

    private AtomicInteger atomicInteger = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
//        new OptimisticLocking().notOptimisticLocking();
        new OptimisticLocking().optimisticLocking();
    }


    // notOptimisticLocking() is unchanged from the previous listing and omitted here

    public void optimisticLocking () throws InterruptedException {
        OptimisticLocking optimisticLocking = new OptimisticLocking();
        ExecutorService executorService = Executors.newFixedThreadPool(50);
        CountDownLatch countDownLatch = new CountDownLatch(5000);
        for (int i = 0; i < 5000; i++) {
            executorService.execute(() -> {
                optimisticLocking.atomicInteger.incrementAndGet();
                countDownLatch.countDown();
            });
        }
        executorService.shutdown();
        countDownLatch.await();
        System.out.println("After execution, i=" + optimisticLocking.atomicInteger);
    }
}

We added an AtomicInteger field, which is an atomic class, and replaced the `i++` call with `atomicInteger.incrementAndGet()`. The `incrementAndGet()` method uses the CAS mechanism, i.e. an optimistic lock. Let's run the program and look at the result:

After execution, i=5000

After execution, i=5000

After execution, i=5000

We again ran it three times, and all three results were 5000, as expected. That is the optimistic lock. To summarize briefly: an optimistic lock places no restriction on reads; only when updating does it compare versions to make sure the data is unchanged before writing. From this we can see that optimistic locks suit read-heavy, write-light scenarios.

2: Pessimistic lock

A pessimistic lock is the opposite of an optimistic lock. It locks when reading the data and releases the lock only after the update is finished. During that time only one thread can operate on the data; all other threads must wait. In Java, pessimistic locking can be implemented with the synchronized keyword or the ReentrantLock class. Let's implement the example above both ways, first with synchronized:

package com.bfxy.esjob;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * @Author: qiuj
 * @Description:    Pessimistic lock
 * @Date: 2020-06-27 15:08
 */
public class PessimisticLocking {
    private int i = 0;

    public static void main(String[] args) throws InterruptedException {
        new PessimisticLocking().synchronizedPessimisticLocking();
    }

    public void synchronizedPessimisticLocking() throws InterruptedException {
        PessimisticLocking pessimisticLocking = new PessimisticLocking();
        ExecutorService executorService = Executors.newFixedThreadPool(50);
        CountDownLatch countDownLatch = new CountDownLatch(5000);
        for (int i = 0; i < 5000; i++) {
            executorService.execute(() -> {
                // Only the thread holding pessimisticLocking's monitor may increment
                synchronized (pessimisticLocking) {
                    pessimisticLocking.i++;
                }
                countDownLatch.countDown();
            });
        }
        executorService.shutdown();
        countDownLatch.await();
        System.out.println("After execution, i=" + pessimisticLocking.i);
    }

}

The only change is the added synchronized block. The object it locks is pessimisticLocking: among all the threads, whichever obtains the lock on that object may execute `i++`. With this synchronized pessimistic lock, `i++` becomes thread safe. Let's run it and see what happens:

After execution, i=5000

After execution, i=5000

After execution, i=5000

We ran it three times, and the result is 5000 each time, as expected. Next we implement the pessimistic lock with the ReentrantLock class:

package com.bfxy.esjob;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * @Author: qiuj
 * @Description:    Pessimistic lock
 * @Date: 2020-06-27 15:08
 */
public class PessimisticLocking {
    private int i = 0;
    private Lock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
//        new PessimisticLocking().synchronizedPessimisticLocking();
        new PessimisticLocking().reentrantLockPessimisticLocking();
    }

    // synchronizedPessimisticLocking() is unchanged from the previous listing and omitted here

    public void reentrantLockPessimisticLocking() throws InterruptedException {
        PessimisticLocking pessimisticLocking = new PessimisticLocking();
        ExecutorService executorService = Executors.newFixedThreadPool(50);
        CountDownLatch countDownLatch = new CountDownLatch(5000);
        for (int i = 0; i < 5000; i++) {
            executorService.execute(() -> {
                pessimisticLocking.lock.lock();
                try {
                    pessimisticLocking.i++;
                } finally {
                    // Always release in finally so an exception cannot leak the lock
                    pessimisticLocking.lock.unlock();
                }
                countDownLatch.countDown();
            });
        }
        executorService.shutdown();
        countDownLatch.await();
        System.out.println("After execution, i=" + pessimisticLocking.i);
    }

}

We added `Lock lock = new ReentrantLock();`, acquire the lock with `lock.lock()` before `i++`, and release it with `lock.unlock()` afterwards. Again we run it three times and look at the results:

After execution, i=5000

After execution, i=5000

After execution, i=5000

All three runs print 5000, fully as expected. To summarize the pessimistic lock: it locks when reading the data, and only one thread at a time may perform the update; there is no version comparison as with the optimistic lock. Pessimistic locks therefore suit write-heavy, read-light scenarios.

2. Fair lock and unfair lock

In the previous section we introduced optimistic and pessimistic locks. In this section we look at locks from another dimension: fair and unfair locks. As the names suggest, a fair lock treats every thread fairly under multithreading, while an unfair lock does the opposite. Since that is still a little abstract, here is an example: a supermarket locker. There is only one locker, and three people arrive to use it at the same time. A grabs the locker first and uses it while B and C queue up behind. When A is done, the first person in the queue uses the locker next. That is a fair lock: all threads queue up, and when one thread finishes, the next thread in line proceeds.

An unfair lock is different. While A uses the locker, B and C do not queue. When A finishes, A throws the key behind: whoever catches it, B or C, uses the locker next, and even a latecomer D might suddenly appear and grab the key first. That is an unfair lock.

Both fair and unfair locks are implemented in the ReentrantLock class. Let's look at the ReentrantLock source code:

    /**
     * Creates an instance of {@code ReentrantLock}.
     * This is equivalent to using {@code ReentrantLock(false)}.
     */
    public ReentrantLock() {
        sync = new NonfairSync();
    }

    /**
     * Creates an instance of {@code ReentrantLock} with the
     * given fairness policy.
     *
     * @param fair {@code true} if this lock should use a fair ordering policy
     */
    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }

ReentrantLock has two constructors. The default constructor sets `sync = new NonfairSync();` - literally an unfair lock. The second constructor takes a boolean parameter: true creates a fair lock, false an unfair one. From the source above we can also see that sync has two implementation classes, FairSync and NonfairSync. Let's look at the core lock-acquisition methods, starting with the FairSync source:

    /**
     * Sync object for fair locks
     */
    static final class FairSync extends Sync {
        private static final long serialVersionUID = -3000897897090466540L;

        final void lock() {
            acquire(1);
        }

        /**
         * Fair version of tryAcquire.  Don't grant access unless
         * recursive call or no waiters or is first.
         */
        protected final boolean tryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (!hasQueuedPredecessors() &&
                    compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0)
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }
    }

And here is the unfair version:

        /**
         * Performs non-fair tryLock.  tryAcquire is implemented in
         * subclasses, but both need nonfair try for trylock method.
         */
        final boolean nonfairTryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0) // overflow
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }

Comparing the two methods, the only difference is the `!hasQueuedPredecessors()` check. As the name suggests, this method asks whether other threads are already waiting in a queue. From this we can infer that a fair lock puts all waiting threads into a queue and, when one thread finishes, takes the next thread from the head of the queue, while an unfair lock skips this check. That is the underlying implementation of fair and unfair locks. In everyday use we don't need to dig this deep; it is enough to understand what fair and unfair mean and to pass true or false to the constructor.
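For example, the fairness policy chosen at construction can be checked directly (a minimal sketch using the real ReentrantLock API):

```java
import java.util.concurrent.locks.ReentrantLock;

// Demonstrates the two ReentrantLock constructors discussed above.
public class FairnessChoice {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();   // default: NonfairSync
        ReentrantLock fair = new ReentrantLock(true); // FairSync
        System.out.println(unfair.isFair()); // prints false
        System.out.println(fair.isFair());   // prints true
    }
}
```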

1: Fair lock

The fair lock works as follows:

Multiple threads call the method at the same time. Thread A grabs the lock and executes; the other threads wait in the queue. When A finishes, the next thread B is taken from the queue and executes, and so on. It is fair to every thread: a thread that arrives later never runs before one that arrived earlier.
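The waiting queue described above can be observed directly via ReentrantLock's queue-inspection methods. A small sketch (timing-dependent: the 300 ms sleep is an assumption that gives the second thread time to park in the queue):

```java
import java.util.concurrent.locks.ReentrantLock;

// Shows a thread parked in a fair lock's wait queue.
public class FairQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair lock
        lock.lock();                                  // main thread holds the lock
        Thread waiter = new Thread(lock::lock);       // this thread must queue
        waiter.start();
        Thread.sleep(300);                            // let the waiter park in the queue
        System.out.println("queued threads: " + lock.getQueueLength());
        lock.unlock();                                // hand the lock to the waiter
        waiter.join(1000);
    }
}
```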

 

2: Unfair lock

The unfair lock works as follows:

Multiple threads call the method at the same time. Thread A grabs the lock and executes, but the other threads do not queue. When A finishes and releases the lock, whichever thread grabs it next executes the method. A thread that arrives later may well preempt the lock ahead of one that arrived earlier.

3. Summary

There are many kinds of locks in Java; here I picked some typical types to introduce. Optimistic and pessimistic locks are the most basic and must be mastered, since we inevitably use them at work. As for fair versus unfair locks, people usually use the unfair lock, which is also the default. A fair lock fits scenarios such as a flash sale (seckill), where requests must be served strictly first come, first served and queuing is required, which makes it the ideal case for a fair lock.


Posted on Sat, 27 Jun 2020 04:21:02 -0400 by phpbaby2009