JUC Study Chapter 8: J.U.C Reentrant Principle

Reentrant principle

static final class NonfairSync extends Sync {
    // ...

    // Method inherited from Sync; placed here for readability
    final boolean nonfairTryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        // If the lock is already held and its owner is the current thread, this is a reentrant acquisition
        else if (current == getExclusiveOwnerThread()) {
            // state++
            int nextc = c + acquires;
            if (nextc < 0) // overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

    // Method inherited from Sync; placed here for readability
    protected final boolean tryRelease(int releases) {
        // state--
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        boolean free = false;
        // Lock reentry is supported. It can be released successfully only when the state is reduced to 0
        if (c == 0) {
            free = true;
            setExclusiveOwnerThread(null);
        }
        setState(c);
        return free;
    }
}

Each reentrant acquisition increments the state; each unlock decrements it, and the lock is only fully released when the state drops back to 0
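
To see the same behavior from the API side, here is a minimal sketch (class and method names are just for illustration) that reenters a ReentrantLock; getHoldCount() mirrors the state counter described above:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        outer();
    }

    static void outer() {
        lock.lock();                     // state: 0 -> 1
        try {
            inner();                     // the same thread reenters without blocking
            System.out.println("hold count back in outer: " + lock.getHoldCount()); // 1
        } finally {
            lock.unlock();               // state: 1 -> 0, the lock is fully released
        }
    }

    static void inner() {
        lock.lock();                     // state: 1 -> 2
        try {
            System.out.println("hold count in inner: " + lock.getHoldCount());      // 2
        } finally {
            lock.unlock();               // state: 2 -> 1, still held by this thread
        }
    }
}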

Interruptible principle

Non-interruptible mode: in this mode, even if the thread is interrupted, it stays in the AQS queue and keeps waiting until it acquires the lock

// Sync inherited from AQS
static final class NonfairSync extends Sync {
    // ...

    private final boolean parkAndCheckInterrupt() {
        // If the interrupt flag is already set, park returns immediately (the park is invalidated)
        LockSupport.park(this);
        // Thread.interrupted() returns the interrupt flag and clears it
        return Thread.interrupted();
    }

    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null;
                    failed = false;
                    // The interrupt status can only be returned after the lock has been acquired
                    return interrupted;
                }
                if (
                        shouldParkAfterFailedAcquire(p, node) &&
                                parkAndCheckInterrupt()
                ) {
                    // If the thread was woken by an interrupt, record the interrupt status as true
                    interrupted = true;
                }
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

    public final void acquire(int arg) {
        if (
                !tryAcquire(arg) &&
                        acquireQueued(addWaiter(Node.EXCLUSIVE), arg)
        ) {
            // If acquireQueued reported an interrupt, restore the interrupt status
            selfInterrupt();
        }
    }

    static void selfInterrupt() {
        // Re-set the interrupt flag; for a thread that is running normally (not blocked in sleep etc.), interrupt() just sets the flag and causes no error
        Thread.currentThread().interrupt();
    }
}

Interruptible mode

static final class NonfairSync extends Sync {
    public final void acquireInterruptibly(int arg) throws InterruptedException {
        if (Thread.interrupted())
            throw new InterruptedException();
        // If the lock is not acquired, enter (1)
        if (!tryAcquire(arg))
            doAcquireInterruptibly(arg);
    }

    // (1) An interruptible lock-acquisition process
    private void doAcquireInterruptibly(int arg) throws InterruptedException {
        final Node node = addWaiter(Node.EXCLUSIVE);
        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                        parkAndCheckInterrupt()) {
                    // If interrupted while parked, execution enters this branch
                    // and throws the exception immediately instead of looping again
                    throw new InterruptedException();
                }
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }
}
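
A small usage sketch of the difference (my own demo, not from the source): a thread blocked in lockInterruptibly() can be woken with an InterruptedException instead of waiting in the queue forever:

import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                              // main thread holds the lock

        Thread t1 = new Thread(() -> {
            try {
                lock.lockInterruptibly();         // waits in the AQS queue, but can be interrupted
            } catch (InterruptedException e) {
                System.out.println("t1 gave up waiting: interrupted");
                return;
            }
            try {
                System.out.println("t1 acquired the lock");
            } finally {
                lock.unlock();
            }
        }, "t1");

        t1.start();
        Thread.sleep(1000);
        t1.interrupt();                           // wakes t1 out of the queue with an InterruptedException
        lock.unlock();
    }
}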

Implementation principle of fair lock

static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;
    final void lock() {
        acquire(1);
    }

    // Method inherited from AQS; placed here for readability
    public final void acquire(int arg) {
        if (
                !tryAcquire(arg) &&
                        acquireQueued(addWaiter(Node.EXCLUSIVE), arg)
        ) {
            selfInterrupt();
        }
    }
    // The main difference from the non-fair lock is the tryAcquire implementation
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // First check whether the AQS queue has a waiting predecessor; compete only if it does not
            if (!hasQueuedPredecessors() &&
                    compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

    // Method inherited from AQS; placed here for readability
    public final boolean hasQueuedPredecessors() {
        Node t = tail;
        Node h = head;
        Node s;
        // h != t means the queue is not empty
        return h != t &&
                (
                        // (s = h.next) == null means the second node has not been fully linked in yet
                        // (another thread is still enqueuing), so conservatively report a predecessor
                        (s = h.next) == null ||
                                // or the second node in the queue belongs to some other thread
                                s.thread != Thread.currentThread()
                );
    }
}
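
A quick usage note (my own sketch): whether FairSync or NonfairSync is used is decided solely by the ReentrantLock constructor argument. Fair mode trades throughput for FIFO ordering and is usually only worth it when starvation is an actual problem.

import java.util.concurrent.locks.ReentrantLock;

public class FairVsNonfair {
    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();      // default: NonfairSync, a newcomer may barge in
        ReentrantLock fair    = new ReentrantLock(true);  // FairSync: tryAcquire first checks hasQueuedPredecessors()

        System.out.println("nonFair.isFair() = " + nonFair.isFair()); // false
        System.out.println("fair.isFair()    = " + fair.isFair());    // true
    }
}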

Implementation principle of conditional variables

Graphic flow

Each condition variable actually corresponds to a waiting queue, and its implementation class is ConditionObject

The await process: Thread-0 holds the lock and calls await, entering ConditionObject's addConditionWaiter process. A new Node with waitStatus -2 (Node.CONDITION) is created, associated with Thread-0, and appended to the tail of the waiting queue

Next comes AQS's fullyRelease process, which fully releases the lock on the synchronizer

The next node in the AQS queue is unparked so it can compete for the lock; assuming there are no other competitors, Thread-1 acquires it

Thread-0 then blocks in park

The signal process

Suppose Thread-1 wants to wake up Thread-0

Thread-1 enters ConditionObject's doSignal process and takes the first Node in the waiting queue, i.e. the Node associated with Thread-0

The transferForSignal process then runs: the Node is appended to the end of the AQS queue, Thread-0's waitStatus is changed to 0, and the waitStatus of its new predecessor (Thread-3) is changed to -1

Thread-1 releases the lock and enters the unlock process
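
The following minimal usage sketch (my own, the flag and variable names are illustrative) maps the flow above onto the public API: await releases the lock and parks, signal transfers the waiter back to the AQS queue, and the waiter only returns from await after re-acquiring the lock:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition ready = lock.newCondition();
    static boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t0 = new Thread(() -> {
            lock.lock();
            try {
                while (!flag) {
                    ready.await();        // fullyRelease the lock, park in the condition's waiting queue
                }
                System.out.println("Thread-0 resumed after signal");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        }, "Thread-0");
        t0.start();

        Thread.sleep(500);
        lock.lock();                      // the signalling thread must hold the lock
        try {
            flag = true;
            ready.signal();               // transferForSignal: move Thread-0's node to the AQS queue
        } finally {
            lock.unlock();                // Thread-0 can now compete for the lock and return from await
        }
    }
}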

Source code:

public class ConditionObject implements Condition, java.io.Serializable {
    private static final long serialVersionUID = 1173984872572414699L;

    // First waiting node
    private transient Node firstWaiter;

    // Last waiting node
    private transient Node lastWaiter;
    public ConditionObject() { }
    // (1) Add a Node to the waiting queue
    private Node addConditionWaiter() {
        Node t = lastWaiter;
        // If the tail waiter has been cancelled, purge all cancelled nodes from the list, see (2)
        if (t != null && t.waitStatus != Node.CONDITION) {
            unlinkCancelledWaiters();
            t = lastWaiter;
        }
        // Create a new Node associated with the current thread and add it to the end of the queue
        Node node = new Node(Thread.currentThread(), Node.CONDITION);
        if (t == null)
            firstWaiter = node;
        else
            t.nextWaiter = node;
        lastWaiter = node;
        return node;
    }
    // Wake up: transfer the first non-cancelled node to the AQS queue
    private void doSignal(Node first) {
        do {
            // first was the last waiter, so the waiting queue becomes empty
            if ( (firstWaiter = first.nextWaiter) == null) {
                lastWaiter = null;
            }
            first.nextWaiter = null;
        } while (
            // Transfer the node to the AQS queue, see (3); if the transfer fails
                !transferForSignal(first) &&
                        // and the waiting queue still has nodes, keep looping
                        (first = firstWaiter) != null
        );
    }

    // Outer-class (AQS) method; placed here for readability
    // (3) If the node has been cancelled, return false (transfer failed); otherwise the transfer succeeds
    final boolean transferForSignal(Node node) {
        // Reset the node's waitStatus from CONDITION to 0 (it is about to join the tail of the AQS queue);
        // if the CAS fails, the status is no longer Node.CONDITION, which means the wait was cancelled
        if (!compareAndSetWaitStatus(node, Node.CONDITION, 0))
            return false;
        // Join the end of AQS queue
        Node p = enq(node);
        int ws = p.waitStatus;
        if (
            // The predecessor that the node was enqueued behind has been cancelled,
                ws > 0 ||
                        // or its waitStatus cannot be set to Node.SIGNAL,
                        !compareAndSetWaitStatus(p, ws, Node.SIGNAL)
        ) {
            // so unpark the thread directly and let it resynchronize its state
            LockSupport.unpark(node.thread);
        }
        return true;
    }
    // Wake up all: transfer every node in the waiting queue to the AQS queue
    private void doSignalAll(Node first) {
        lastWaiter = firstWaiter = null;
        do {
            Node next = first.nextWaiter;
            first.nextWaiter = null;
            transferForSignal(first);
            first = next;
        } while (first != null);
    }

    // (2)
    private void unlinkCancelledWaiters() {
        // ...
    }
    // Wake up: the caller must hold the lock, so doSignal itself needs no extra locking
    public final void signal() {
        // If the lock is not held, an exception is thrown
        if (!isHeldExclusively())
            throw new IllegalMonitorStateException();
        Node first = firstWaiter;
        if (first != null)
            doSignal(first);
    }
    // Wake up all: the caller must hold the lock, so doSignalAll needs no extra locking
    public final void signalAll() {
        if (!isHeldExclusively())
            throw new IllegalMonitorStateException();
        Node first = firstWaiter;
        if (first != null)
            doSignalAll(first);
    }
    // Non-interruptible wait: waits until awakened
    public final void awaitUninterruptibly() {
        // Add a Node to the waiting queue, see (1)
        Node node = addConditionWaiter();
        // Fully release the lock held by the node, see (4)
        int savedState = fullyRelease(node);
        boolean interrupted = false;
        // If the node has not been transferred to the AQS queue, it will be blocked
        while (!isOnSyncQueue(node)) {
            // park blocking
            LockSupport.park(this);
            // If it is interrupted, only the interruption status is set
            if (Thread.interrupted())
                interrupted = true;
        }
        // After being woken, re-acquire the lock in the AQS queue using the saved hold count
        if (acquireQueued(node, savedState) || interrupted)
            selfInterrupt();
    }
    // Outer-class (AQS) method; placed here for readability
    // (4) Because the thread may have reentered the lock, all hold counts must be released: read the whole state and release it in one go
    final int fullyRelease(Node node) {
        boolean failed = true;
        try {
            int savedState = getState();
            // Release the lock completely; this wakes up the next node in the AQS queue
            if (release(savedState)) {
                failed = false;
                return savedState;
            } else {
                throw new IllegalMonitorStateException();
            }
        } finally {
            if (failed)
                node.waitStatus = Node.CANCELLED;
        }
    }
    // Interrupt mode: re-assert the interrupt status when exiting the wait
    private static final int REINTERRUPT = 1;
    // Interrupt mode: throw InterruptedException when exiting the wait
    private static final int THROW_IE = -1;
    // Determine which interrupt mode applies
    private int checkInterruptWhileWaiting(Node node) {
        return Thread.interrupted() ?
                (transferAfterCancelledWait(node) ? THROW_IE : REINTERRUPT) :
                0;
    }
    // (5) Apply the interrupt mode
    private void reportInterruptAfterWait(int interruptMode)
            throws InterruptedException {
        if (interruptMode == THROW_IE)
            throw new InterruptedException();
        else if (interruptMode == REINTERRUPT)
            selfInterrupt();
    }
    // Wait until awakened or interrupted
    public final void await() throws InterruptedException {
        if (Thread.interrupted()) {
            throw new InterruptedException();
        }
        // Add a Node to the waiting queue, see (1)
        Node node = addConditionWaiter();
        // Fully release the lock held by the node, see (4)
        int savedState = fullyRelease(node);
        int interruptMode = 0;
        // If the node has not been transferred to the AQS queue, it will be blocked
        while (!isOnSyncQueue(node)) {
            // park blocking              
            LockSupport.park(this);
            // If interrupted, exit the waiting queue
            if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
                break;
        }
        // After exiting the waiting queue, you also need to obtain the lock of the AQS queue
        if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
            interruptMode = REINTERRUPT;
        // Purge cancelled nodes from the waiting list, see (2)
        if (node.nextWaiter != null)
            unlinkCancelledWaiters();
        // Apply the interrupt mode, see (5)
        if (interruptMode != 0)
            reportInterruptAfterWait(interruptMode);
    }
    // Wait - until awakened or interrupted or timed out
    public final long awaitNanos(long nanosTimeout) throws InterruptedException {
        if (Thread.interrupted()) {
            throw new InterruptedException();
        }
        // Add a Node to the waiting queue, see (1)
        Node node = addConditionWaiter();
        // Release the lock held by the node
        int savedState = fullyRelease(node);
        // Get deadline
        final long deadline = System.nanoTime() + nanosTimeout;
        int interruptMode = 0;
        // If the node has not been transferred to the AQS queue, it will be blocked
        while (!isOnSyncQueue(node)) {
            // Timed out, exiting the waiting queue
            if (nanosTimeout <= 0L) {
                transferAfterCancelledWait(node);
                break;
            }
            // park blocks for a certain time, and spinForTimeoutThreshold is 1000 ns
            if (nanosTimeout >= spinForTimeoutThreshold)
                LockSupport.parkNanos(this, nanosTimeout);
            // If interrupted, exit the waiting queue
            if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
                break;
            nanosTimeout = deadline - System.nanoTime();
        }
        // After exiting the waiting queue, you also need to obtain the lock of the AQS queue
        if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
            interruptMode = REINTERRUPT;
        // Purge cancelled nodes from the waiting list, see (2)
        if (node.nextWaiter != null)
            unlinkCancelledWaiters();
        // Apply the interrupt mode, see (5)
        if (interruptMode != 0)
            reportInterruptAfterWait(interruptMode);
        return deadline - System.nanoTime();
    }
    // Wait - until awakened or interrupted or timed out, the logic is similar to awaitNanos
    public final boolean awaitUntil(Date deadline) throws InterruptedException {
        // ...
    }
    // Wait - until awakened or interrupted or timed out, the logic is similar to awaitNanos
    public final boolean await(long time, TimeUnit unit) throws InterruptedException {
        // ...
    }
    // Tool method omitted
}

Read-write lock

ReentrantReadWriteLock

When reads far outnumber writes, a read-write lock allows read-read access to proceed concurrently and improves performance. Read-write and write-write access remain mutually exclusive!

The data container class below protects its data with the read lock in the read() method and the write lock in the write() method

public class TestReadAndWrite {
    public static void main(String[] args) {
        DataContainer container = new DataContainer();
        new Thread(container::read,"t1").start();
        new Thread(container::read,"t2").start();
    }
}
@Slf4j(topic = "c.DataContainer")
class DataContainer{
    private Object data;

    //Create read / write lock
    private ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private ReentrantReadWriteLock.ReadLock readLock = rw.readLock();
    private ReentrantReadWriteLock.WriteLock writeLock = rw.writeLock();

    public Object read() {
        log.debug("Get read lock...");
        readLock.lock();
        try {
            log.debug("read");
            Thread.sleep(1000);
            return data;
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            log.debug("Release read lock");
            readLock.unlock();
        }
        return data;
    }

    public void write(){
        log.debug("Get write lock.....");
        writeLock.lock();
        try {
            log.debug("write in");
        }finally {
            log.debug("Release write lock.....");
            writeLock.unlock();
        }
    }
}

Notes

  1. Read lock does not support conditional variables

  2. Lock upgrading during reentry is not supported: acquiring the write lock while holding the read lock makes the thread wait for the write lock forever

r.lock();
try {
    // ...
    w.lock();  // Read and write locks cannot be held at the same time
    try {
        // ...
    } finally {
        w.unlock();
    }
} finally {
    r.unlock();
}

Downgrading during reentry is supported: a thread holding the write lock may acquire the read lock

class CachedData {
    Object data;
    // Is it valid? If it fails, recalculate the data
    volatile boolean cacheValid;
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // The read lock must be released before acquiring the write lock
            //Upgrade is not supported
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Judge whether other threads have obtained the write lock and updated the cache to avoid repeated updates
                if (!cacheValid) {
                    data = ...
                    cacheValid = true;
                }
                // Demote to read lock and release the write lock, so that other threads can read the cache
                rwl.readLock().lock();
            } finally {

                rwl.writeLock().unlock();
            }
        }
        // After the data has been used, release the read lock
        try {
            use(data);
        } finally {
            rwl.readLock().unlock();
        }
    }
}

Application: caching

  1. Cache update policy

When updating, should we clear the cache first or update the database first?

Clear the cache first

Update the database first

In addition, the problematic interleaving requires that a query thread happens to be repopulating the cache at that moment (because the cached entry expired, or it is the very first query), so the probability of this situation is very small

  2. Read/write lock for consistent caching

Code example: use a read-write lock to implement a simple load-on-demand cache

@Slf4j(topic = "c.TestCache")
public class TestCache {
    public static void main(String[] args) {
        SqlPair sqlPair = new SqlPair();
        for (int i = 0; i < 200; i++) {
            sqlPair.add(new Student(i, "student" + i));
        }
        log.debug("{}", sqlPair.selectByIndex(2));
        log.debug("{}", sqlPair.selectByIndex(2));
        new Thread(() -> {
            try {
                sqlPair.updateByIndex(2, new Student(2, "Updated"));
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }

        }, "t2").start();
        new Thread(() -> {
            while (true) {
                log.debug("{}", sqlPair.selectByIndex(2));
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }, "t1").start();
    }

}

@Slf4j(topic = "c.SqlPair")
class SqlPair {
    List<Student> students = new LinkedList<>();
    Hashtable<Integer, Student> hashtable = new Hashtable<>();
    //Create read / write lock
    private ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private ReentrantReadWriteLock.ReadLock readLock = rw.readLock();
    private ReentrantReadWriteLock.WriteLock writeLock = rw.writeLock();

    public void add(Student student) {
        students.add(student);
    }

    public Student selectByIndex(int index) {
        readLock.lock();
        try {
            if (hashtable.containsKey(index)) {
                log.debug("Look in the cache....");
                return hashtable.get(index);
            }
            log.debug("Look in the database....");
            Student res = students.get(index);
            hashtable.put(index, res);
            return res;
        }finally {
            readLock.unlock();
        }

    }

    public void updateByIndex(int index, Student student) throws InterruptedException {
        writeLock.lock();
        try {
            log.debug("Update element{} -> {}", students.get(index), student.toString());
            students.set(index, student);
            Thread.sleep(2000);
            log.debug("Delete cache......");
            hashtable.remove(index);
        }finally {
            writeLock.unlock();
        }

    }
}

class Student {
    private int id;
    private String name;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Student(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public String toString() {
        return "Student{" +
                "id=" + id +
                ", name='" + name + '\'' +
                '}';
    }
}

 

Read-write lock principle

Graphic flow

The read lock and the write lock use the same Sync synchronizer, so they share the same waiting queue and the same state, which works as follows:

t1 w.lock, t2 r.lock

1) t1 locks successfully. The process is no different from ReentrantLock locking, except that the write lock count occupies the lower 16 bits of state while the read lock count uses the upper 16 bits (see the helper sketch after this walkthrough)

2) t2 executes r.lock and enters the read lock's sync.acquireShared(1) process, which first calls tryAcquireShared. Because the write lock is held, tryAcquireShared returns -1, indicating failure

The return value of tryAcquireShared means:

  1. -1 means failure
  2. 0 means success, but subsequent nodes will not be woken up
  3. A positive number means success, and the value is how many subsequent nodes need to be woken up; our read-write lock always returns 1

3) Execution then enters sync.doAcquireShared(1). First addWaiter adds the node; the difference is that the node is created in Node.SHARED mode instead of Node.EXCLUSIVE. Note that t2 is still running at this point

4) t2 checks whether its node is the second one in the queue (right after the head); if so, it calls tryAcquireShared(1) again to try to acquire the lock

5) If that fails, the for (;;) loop in doAcquireShared sets the predecessor's waitStatus to -1, then loops again to retry tryAcquireShared(1); if it still fails, the thread parks at parkAndCheckInterrupt()
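
As noted in step 1, both counts live in the single AQS state int. A minimal sketch of how such a split can be decoded (this mirrors the idea behind the sharedCount/exclusiveCount helpers in ReentrantReadWriteLock.Sync; the class here is my own illustration):

// A sketch of how one AQS state int can hold both counts.
public class StateSplit {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);        // +1 read hold = state + 65536
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;    // low 16 bits

    static int readCount(int state)  { return state >>> SHARED_SHIFT; }   // upper 16 bits
    static int writeCount(int state) { return state & EXCLUSIVE_MASK; }   // lower 16 bits

    public static void main(String[] args) {
        int state = 0;
        state += 1;             // t1 takes the write lock once
        state += SHARED_UNIT;   // one read hold acquired later
        System.out.println("write holds = " + writeCount(state)); // 1
        System.out.println("read holds  = " + readCount(state));  // 1
    }
}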

Continuing:

t3 r.lock, t4 w.lock

In this state, suppose t3 requests the read lock and t4 requests the write lock while t1 still holds the write lock; the queue then becomes the following

Next t1 executes w.unlock, entering the write lock's sync.release(1) process; sync.tryRelease(1) succeeds, and the state becomes the following

Then comes the wake-up process sync.unparkSuccessor, which lets the second node in the queue resume running. t2 resumes at parkAndCheckInterrupt() in doAcquireShared (note that at this point it has only resumed running, it has not yet acquired the lock!). The for (;;) loop executes tryAcquireShared again; if it succeeds, the read lock count is incremented by one

t2 has now resumed running; it calls setHeadAndPropagate(node, 1), and its node becomes the new head

Before returning, setHeadAndPropagate checks whether the next node is in shared mode; if so, it calls doReleaseShared() to change the head's waitStatus from -1 to 0 and wake up the second node. t3 then resumes at parkAndCheckInterrupt() in doAcquireShared

The for (;;) loop executes tryAcquireShared again; if it succeeds, the read lock count is incremented by one

t3 has now resumed running; it calls setHeadAndPropagate(node, 1), and its node becomes the new head

The next node (t4) is not in shared mode, so the wake-up does not propagate to t4

Next t2 and t3 execute r.unlock. t2 enters sync.releaseShared(1) and calls tryReleaseShared(1) to decrement the read count, but the count has not yet reached zero

t3 then enters sync.releaseShared(1) and calls tryReleaseShared(1) to decrement the count; this time the count reaches zero, so it enters doReleaseShared(), changes the head node's waitStatus from -1 to 0, and wakes up the second node

t4 then resumes at parkAndCheckInterrupt() in acquireQueued and loops again; this time its node is second in the queue and there is no other competition, so tryAcquire(1) succeeds, the head node is updated, and the process ends

 

StampedLock

This class was added in JDK 8 to further optimize read performance. Its distinguishing feature is that the read lock and the write lock must always be used together with a stamp

Acquiring / releasing the read lock

long stamp = lock.readLock();
lock.unlockRead(stamp);

Acquiring / releasing the write lock

long stamp = lock.writeLock();
lock.unlockWrite(stamp);

Optimistic read: StampedLock provides the tryOptimisticRead() method. After reading, the stamp must be validated; if validation succeeds, no write happened in the meantime and the data can be used safely. If validation fails, the read lock has to be acquired and the data re-read to guarantee safety.

long stamp = lock.tryOptimisticRead();
// Check the stamp after reading
if (!lock.validate(stamp)) {
    // Lock upgrade: fall back to a real read lock
    stamp = lock.readLock();
    try {
        // ... re-read the data ...
    } finally { lock.unlockRead(stamp); }
}

The data container class below protects its data with an optimistic read in read() (upgrading to the read lock when validation fails) and with the write lock in writer()

Basic usage

@Slf4j(topic = "c.TestStampedLock")
public class TestStampedLock {
    public static void main(String[] args) throws InterruptedException {
        DataContainerStamped dataContainer = new DataContainerStamped(1);
        new Thread(()->{
            try {
                dataContainer.read(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        },"t1").start();
        Thread.sleep(500);
        new Thread(()->{
            try {
                //dataContainer.read(0);
                dataContainer.writer(2);
            } catch (Exception e) {
                e.printStackTrace();
            }
        },"t2").start();
    }
}
@Slf4j(topic = "c.DataContainerStamped")
class DataContainerStamped{
    private int data;
    private final StampedLock lock = new StampedLock();

    public DataContainerStamped(int data) {
        this.data = data;
    }

    public int read(int readTime) throws InterruptedException {
        //Optimistic read lock
        long stamp = lock.tryOptimisticRead(); //Optimistic read, get a stamp
        log.debug("tryOptimisticRead read locking {}",stamp);
        TimeUnit.SECONDS.sleep(readTime); //Analog read time
        if (lock.validate(stamp)){ //If the verification passes, it means that there is no write operation during this period, and the data can be used safely. If the verification fails, you need to obtain the read lock again to ensure data security.
            log.debug("read finish.....{}",stamp);
            return data;
        }
        //Validation failed: upgrade to a real read lock
        log.debug("upgrading to read lock.....{}",stamp);
        try {
            stamp = lock.readLock();
            log.debug("read lock {}",stamp);
            TimeUnit.SECONDS.sleep(readTime);
            log.debug("read finish.....{}",stamp);
            return data;
        }finally {
            log.debug("read unlock {}",stamp);
            lock.unlock(stamp);
        }
    }

    public void writer(int nData){
        long stamp = lock.writeLock();
        log.debug("writer:{}",stamp);
        try {
            Thread.sleep(2000);
            this.data = nData;
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            log.debug("writer unlock {}",stamp);
            lock.unlock(stamp);
        }
    }

}

Result:

Read-read concurrency: both reads complete with only the optimistic read.

Read-write concurrency: the reader's validation fails, so it upgrades to the read lock and waits for the writer to release.

 

However, StampedLock has two limitations:

StampedLock does not support conditional variables

StampedLock does not support reentry

Semaphore

A Semaphore limits the maximum number of threads that can access a shared resource at the same time. Think of a parking lot: the permits are the parking spaces and the cars are the threads; the semaphore is the sign at the entrance showing how many spaces are left.

Basic usage:

public static void main(String[] args) {
        // permits: the maximum number of threads allowed to access at the same time
        // (a two-argument constructor additionally chooses fair or non-fair mode)
        Semaphore semaphore = new Semaphore(3);
        for (int i = 0; i < 10; i++) {
            new Thread(()->{
                try {
                    log.debug("{}Trying to get a license semaphore",Thread.currentThread().getName());
                    semaphore.acquire();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                try {
                    log.debug("running......");
                    Thread.sleep(1000);
                    log.debug("end.....");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }finally {
                    //Release the permit
                    semaphore.release();
                }
            },"t"+i).start();
        }
    }

  A thread blocks in acquire() when no permits are left (the count is 0)
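
If blocking is not desired, tryAcquire can be used instead; a small sketch of my own:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TryAcquireDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(1);
        semaphore.acquire();                       // use up the only permit

        // Non-blocking attempt: returns false immediately because no permit is left
        System.out.println("tryAcquire() = " + semaphore.tryAcquire());

        // Timed attempt: waits up to 500 ms, then gives up and returns false
        boolean got = semaphore.tryAcquire(500, TimeUnit.MILLISECONDS);
        System.out.println("tryAcquire(500ms) = " + got);

        semaphore.release();                       // permit returned, later attempts can succeed
    }
}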

  Graphic flow

Semaphore is a bit like a parking lot: the permits are the parking spaces and the threads are the cars, and the sign at the entrance shows how many spaces are free, dropping by one each time a thread acquires a permit. At the beginning permits (the state) is 3, and five threads try to acquire

Suppose Thread-1, Thread-2 and Thread-4 win the CAS competition, while Thread-0 and Thread-3 fail, enter the AQS queue and park

Thread-4 then releases its permit, and the state becomes the following

Next Thread-0 wins the competition, the permit count drops back to 0, Thread-0's node becomes the head, the old head is unlinked, and the next node (Thread-3) is unparked. But since the permits are 0, Thread-3 fails to acquire and parks again

Source code analysis

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = -2694183684443567898L;
    NonfairSync(int permits) {
        // permits is state
        super(permits);
    }

    // Semaphore method; placed here for readability
    public void acquire() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }
    // Method inherited from AQS; placed here for readability
    public final void acquireSharedInterruptibly(int arg)
            throws InterruptedException {
        if (Thread.interrupted())
            throw new InterruptedException();
        if (tryAcquireShared(arg) < 0)
            doAcquireSharedInterruptibly(arg);
    }

    // Trying to get a shared lock
    protected int tryAcquireShared(int acquires) {
        return nonfairTryAcquireShared(acquires);
    }

    // Method inherited from Sync; placed here for readability
    final int nonfairTryAcquireShared(int acquires) {
        for (;;) {
            int available = getState();
            int remaining = available - acquires;
            if (
                // If the permits are used up, a negative number is returned, indicating failure;
                // the caller then enters doAcquireSharedInterruptibly
                    remaining < 0 ||
                            // If the CAS succeeds, the (non-negative) remaining count is returned, indicating success
                            compareAndSetState(available, remaining)
            ) {
                return remaining;
            }
        }
    }

    // Method inherited from AQS; placed here for readability
    private void doAcquireSharedInterruptibly(int arg) throws InterruptedException {
        final Node node = addWaiter(Node.SHARED);
        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head) {
                    // Try to acquire a permit again
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        // After success, the node leaves the AQS queue and is set as head
                        // If head.waitStatus == Node.SIGNAL ==> it is reset to 0 and the next node is unparked
                        // If head.waitStatus == 0 ==> it is set to Node.PROPAGATE
                        // r is the number of remaining permits; if it is 0, the wake-up does not propagate further
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        failed = false;
                        return;
                    }
                }
                // Unsuccessful, set the previous node waitStatus = Node.SIGNAL, and the next round enters park blocking
                if (shouldParkAfterFailedAcquire(p, node) &&
                        parkAndCheckInterrupt())
                    throw new InterruptedException();
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

    // Semaphore method; placed here for readability
    public void release() {
        sync.releaseShared(1);
    }

    // Method inherited from AQS; placed here for readability
    public final boolean releaseShared(int arg) {
        if (tryReleaseShared(arg)) {
            doReleaseShared();
            return true;
        }
        return false;
    }

    // Method inherited from Sync; placed here for readability
    protected final boolean tryReleaseShared(int releases) {
        for (;;) {
            int current = getState();
            int next = current + releases;
            if (next < current) // overflow
                throw new Error("Maximum permit count exceeded");
            if (compareAndSetState(current, next))
                return true;
        }
    }
}

CountDownLatch

CountDownLatch allows one or more threads to block in one place until the tasks of a set of other threads have completed. In Java concurrency, CountDownLatch is a common interview topic, so make sure you understand it well.

CountDownLatch is built on AQS's shared mode. The AQS state is initialized to the count. When a thread calls countDown, tryReleaseShared decrements the state with a CAS operation; when the state reaches 0, every thread has called countDown. A thread that calls await while the state is not yet 0 is placed in the AQS blocking queue and parked; once the last countDown brings the state to 0, the parked threads see state == 0, succeed, and all continue.
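
A minimal sketch of that mechanism, written as a simplified latch on top of AQS (this is not the JDK source, just the same shape, with my own class name):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A simplified CountDownLatch-style latch built on AQS shared mode.
public class SimpleLatch {
    private static final class Sync extends AbstractQueuedSynchronizer {
        Sync(int count) { setState(count); }

        // await succeeds only once the count has reached 0
        @Override
        protected int tryAcquireShared(int ignored) {
            return getState() == 0 ? 1 : -1;
        }

        // countDown: CAS the state down by one; release waiters when it hits 0
        @Override
        protected boolean tryReleaseShared(int ignored) {
            for (;;) {
                int c = getState();
                if (c == 0) return false;          // already open, nothing to release
                int next = c - 1;
                if (compareAndSetState(c, next))
                    return next == 0;              // true -> wake up the queued await-ers
            }
        }
    }

    private final Sync sync;
    public SimpleLatch(int count) { this.sync = new Sync(count); }

    public void await() throws InterruptedException { sync.acquireSharedInterruptibly(1); }
    public void countDown() { sync.releaseShared(1); }
}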

It is used for thread synchronization and cooperation: waiting for all threads to finish counting down. The constructor argument initializes the count, await() waits for the count to reach zero, and countDown() decrements the count by one

Its main benefit is avoiding a pile of join() calls. Basic usage:

public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        ExecutorService service = Executors.newFixedThreadPool(4);
        service.submit(()->{
            log.debug("begin...");
            try {
                sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            latch.countDown();
            log.debug("end...{}", latch.getCount());
        });
        service.submit(()->{
            log.debug("begin...");
            try {
                sleep(1500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            latch.countDown();
            log.debug("end...{}", latch.getCount());
        });
        service.submit(()->{
            log.debug("begin...");
            try {
                sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            latch.countDown();
            log.debug("end...{}", latch.getCount());
        });
        service.submit(()->{
            log.debug("waitStart...");
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            //latch.countDown();
            log.debug("wait end .......");
        });
        service.shutdown();
    }

Application: it can be used for scenarios like game loading, where everyone must finish loading before the game moves on to the next stage.
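
A hedged sketch of that game-loading idea (my own example, names are illustrative): each player thread counts down when its loading finishes, and the main thread awaits before starting the game:

import java.util.Random;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GameLoading {
    public static void main(String[] args) throws InterruptedException {
        int players = 5;
        CountDownLatch latch = new CountDownLatch(players);
        ExecutorService pool = Executors.newFixedThreadPool(players);
        Random random = new Random();

        for (int i = 1; i <= players; i++) {
            int id = i;
            pool.submit(() -> {
                try {
                    Thread.sleep(random.nextInt(2000));   // simulate loading time
                    System.out.println("player " + id + " loaded");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    latch.countDown();                    // this player is ready
                }
            });
        }

        latch.await();                                    // wait until all players have loaded
        System.out.println("All players loaded, game starts!");
        pool.shutdown();
    }
}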

CyclicBarrier

CyclicBarrier ("cyclic barrier", or loop fence) is used for thread cooperation: threads wait until a given count of them have arrived. The count is set in the constructor; each thread calls await() when it reaches the synchronization point and waits, and once the number of waiting threads reaches the count, they all continue. It works like CountDownLatch, but it can be reused

public static void main(String[] args) throws BrokenBarrierException, InterruptedException {
        CyclicBarrier cb = new CyclicBarrier(2); // Execution will continue only when the number is 2
        for (int i = 1; i <= 3; i++) {
            new Thread(() -> {
                log.debug("{} Ready to execute!", Thread.currentThread().getName());
                try {
                    cb.await(); //Wait when the number of threads is insufficient
                    log.debug("{} Continue execution", Thread.currentThread().getName());
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }, "t" + i).start();
            new Thread(() -> {
                log.debug("{} Ready to execute!", Thread.currentThread().getName());
                try {
                    Thread.sleep(2000);//Sleep for two seconds before executing thread 2
                    cb.await(); //Wait when the number of threads is insufficient
                    log.debug("{} !!!!Continue execution", Thread.currentThread().getName());
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }, "t" + (i+1)).start();
        }
    }

However, in the code above, the fast task of one loop iteration and the fast task of the next iteration can together bring the counter to 0, instead of t1 and t2 of the same iteration crossing the barrier together. So take care to keep the number of worker threads equal to the CyclicBarrier's party count to get the intended effect, as the sketch below shows
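
A sketch of that advice (my own variant of the demo above): drive the reusable barrier from a fixed pool whose size equals the barrier's party count, so each round is completed by one fast and one slow task:

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BarrierWithPool {
    public static void main(String[] args) {
        // The barrier action runs once per round, when both parties have arrived
        CyclicBarrier cb = new CyclicBarrier(2, () -> System.out.println("--- round finished ---"));
        // Pool size matches the barrier's party count
        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (int round = 1; round <= 3; round++) {        // the same barrier is reused each round
            pool.submit(() -> {
                try {
                    System.out.println("fast task waiting");
                    cb.await();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            pool.submit(() -> {
                try {
                    Thread.sleep(2000);                   // slow task
                    System.out.println("slow task waiting");
                    cb.await();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}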

Tags: Java JUC
