Semaphore & CountDownLatch & CyclicBarrier in AQS

1, Introduction to Semaphore

Semaphore (a counting semaphore) is the Java counterpart of the operating system's P/V operation primitives. Like the other synchronizers discussed here, it is built on AbstractQueuedSynchronizer.

Semaphore is quite versatile. A semaphore with a single permit behaves like a mutex: only one thread can hold the permit at a time. A semaphore with n permits (n > 0) can be used for rate limiting, allowing at most n threads to hold permits at the same time.
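
As a quick illustration of the one-permit case, here is a minimal sketch (the class name MutexDemo is made up for this example) that uses a single-permit Semaphore as a mutex, so only one thread can be inside the critical section at a time:

import java.util.concurrent.Semaphore;

public class MutexDemo {
    // A semaphore with a single permit behaves like a mutex:
    // at most one thread can be inside the critical section at a time.
    private static final Semaphore mutex = new Semaphore(1);
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            try {
                mutex.acquire();            // P operation: take the only permit
                try {
                    counter++;              // critical section
                } finally {
                    mutex.release();        // V operation: return the permit
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("counter = " + counter); // always 2
    }
}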

In operating systems, P/V operations are an effective way to implement mutual exclusion and synchronization between processes. They operate on a semaphore value S: P is the wait (decrement) operation and V is the signal (increment/release) operation. When P/V operations are used to manage shared resources, the operations themselves must execute atomically.

The main actions of the P operation are:
① Decrement S by 1;
② If the result is still greater than or equal to 0, the process continues to execute;
③ If the result is less than 0, the process is blocked and placed in the semaphore's waiting queue, and control transfers to the process scheduler.

The main actions of the V operation are:
① Increment S by 1;
② If the result is greater than 0, the process continues to execute;
③ If the result is less than or equal to 0, one waiting process is released from the semaphore's waiting queue, and execution then either returns to the original process or transfers to the scheduler.
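
To make the P/V description concrete, the following is a simplified, monitor-based counting-semaphore sketch (not the JDK implementation; the class name SimpleSemaphore is invented for illustration). It blocks in p() while no resource is available instead of literally letting S go negative:

public class SimpleSemaphore {
    private int s; // the semaphore value S

    public SimpleSemaphore(int initial) {
        this.s = initial;
    }

    // P operation: block while no resource is available, then decrement S
    public synchronized void p() throws InterruptedException {
        while (s <= 0) {
            wait();           // the caller is blocked in the waiting queue
        }
        s--;
    }

    // V operation: increment S and release one waiting thread, if any
    public synchronized void v() {
        s++;
        notify();             // wake up one waiter from the waiting queue
    }
}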

1.1 Common methods of Semaphore

constructor

    /**
     * Creates a {@code Semaphore} with the given number of
     * permits and nonfair fairness setting.
     *
     * @param permits the initial number of permits available.
     *        This value may be negative, in which case releases
     *        must occur before any acquires will be granted.
     */
    public Semaphore(int permits) {
        sync = new NonfairSync(permits);
    }

    /**
     * Creates a {@code Semaphore} with the given number of
     * permits and the given fairness setting.
     *
     * @param permits the initial number of permits available.
     *        This value may be negative, in which case releases
     *        must occur before any acquires will be granted.
     * @param fair {@code true} if this semaphore will guarantee
     *        first-in first-out granting of permits under contention,
     *        else {@code false}
     */
    public Semaphore(int permits, boolean fair) {
        sync = fair ? new FairSync(permits) : new NonfairSync(permits);
    }
  • permits: the initial number of permits (i.e., the number of available resources)
  • fair: the fairness setting; when true, permits are granted in FIFO order under contention, so the longest-waiting thread acquires next

Common methods

public void acquire() throws InterruptedException
public boolean tryAcquire()
public void release()
public int availablePermits()
public final int getQueueLength() 
public final boolean hasQueuedThreads()
protected void reducePermits(int reduction)
  • acquire(): blocks until a permit can be acquired
  • tryAcquire(): returns false immediately if no permit is available, so the calling thread is never blocked (see the sketch after this list)
  • release(): releases a permit back to the semaphore
  • int availablePermits(): returns the number of permits currently available in this semaphore
  • int getQueueLength(): returns the number of threads waiting to acquire a permit
  • boolean hasQueuedThreads(): reports whether any threads are waiting to acquire a permit
  • protected void reducePermits(int reduction): shrinks the number of available permits
  • Collection<Thread> getQueuedThreads(): returns the collection of threads waiting to acquire a permit
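
A short usage sketch of the non-blocking path (a stand-alone example assumed for illustration, not from the original code):

import java.util.concurrent.Semaphore;

public class TryAcquireDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);

        if (semaphore.tryAcquire()) {       // returns false at once if no permit is available
            try {
                System.out.println("got a permit, " + semaphore.availablePermits() + " left");
            } finally {
                semaphore.release();        // always return the permit
            }
        } else {
            System.out.println("no permit available, do something else");
        }
    }
}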

1.2 application scenarios

Semaphore is useful for flow control, especially in scenarios where a shared resource has limited capacity.

import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SemaphoreTest2 {

    /**
     * A rate limiter that allows at most 5 requests to be processed at the same time
     */
    private static Semaphore semaphore = new Semaphore(5);

    /**
     * Define a thread pool
     */
    private static ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 50, 60, TimeUnit.SECONDS, new LinkedBlockingDeque<>(200));

    /**
     * Simulated business method
     */
    public static void exec() {
        try {
            semaphore.acquire(1);
            // Simulate the real work
            System.out.println("executing exec method");
            Thread.sleep(2000);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            semaphore.release(1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (; ; ) {
            Thread.sleep(100);
            // Simulate incoming requests at roughly 10 per second
            executor.execute(() -> exec());
        }
    }
}


Because each call to exec() takes about 2 seconds and only 5 permits exist, the output shows the requests being processed in batches of five, which achieves the rate-limiting effect.

1.3 Semaphore source code analysis

Focus:
1. How Semaphore implements locking and unlocking (a shared lock)
2. How a thread that fails to acquire the lock is queued and blocked, and how a thread that releases the lock wakes up blocked threads so they can contend for the lock again

Flow chart of source code tracking:

acquire

// permits: the number of permits to acquire
public void acquire(int permits) throws InterruptedException {
    if (permits < 0) throw new IllegalArgumentException();
    sync.acquireSharedInterruptibly(permits);
}
public final void acquireSharedInterruptibly(int arg)
        throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    // Check whether the remaining permits would be < 0
    // If remaining < 0 the thread must block; otherwise the shared lock was acquired successfully
    if (tryAcquireShared(arg) < 0)
        doAcquireSharedInterruptibly(arg);
}

In the non-fair version (NonfairSync), tryAcquireShared is implemented by nonfairTryAcquireShared:

final int nonfairTryAcquireShared(int acquires) {
    for (;;) {
        // Semaphore uses state to hold the number of available permits
        int available = getState();
        // Calculate remaining resources	
        int remaining = available - acquires;
        // If the number of requests exceeds the number of remaining available resources, the remaining number is returned (it should be negative at this time)
        // If remaining > 0, cas sets the number of remaining resources and returns the number of remaining resources (it should be > = 0 at this time)
        if (remaining < 0 ||
            compareAndSetState(available, remaining))
            return remaining;
    }
}
private void doAcquireSharedInterruptibly(int arg)
    throws InterruptedException {
    // Create a new node and enqueue it in shared mode
    final Node node = addWaiter(Node.SHARED);
    boolean failed = true;
    try {
        // Spin until the lock is acquired (or the thread is interrupted)
        for (;;) {
            // Get the predecessor node
            final Node p = node.predecessor();
            // If the predecessor is the head node, the current node is the first valid node
            if (p == head) {
                // Try again to acquire the shared lock; r is the number of permits left after the attempt
                int r = tryAcquireShared(arg);
                // If the remaining permits >= 0, the acquisition succeeded
                if (r >= 0) {
                    // Set the current node as the head node and unlink the old head
                    setHeadAndPropagate(node, r);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
            }
            // If the predecessor is not the head node (or the acquire failed),
            // shouldParkAfterFailedAcquire checks whether the thread may park (predecessor waitStatus == -1)
            if (shouldParkAfterFailedAcquire(p, node) &&
                // If so, park the thread; parkAndCheckInterrupt returns the interrupt status on wake-up
                parkAndCheckInterrupt())
                // If the thread was interrupted while parked, throw InterruptedException
                throw new InterruptedException();
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

release

public void release(int permits) {
    if (permits < 0) throw new IllegalArgumentException();
    sync.releaseShared(permits);
}
public final boolean releaseShared(int arg) {
    // tryReleaseShared adds the returned permits back into state; if that succeeds, wake up blocked threads
    if (tryReleaseShared(arg)) {
        doReleaseShared();
        return true;
    }
    return false;
}
protected final boolean tryReleaseShared(int releases) {
    for (;;) {
        int current = getState();
        // Compute the new number of permits
        int next = current + releases;
        if (next < current) // overflow
            throw new Error("Maximum permit count exceeded");
        // After calculation, cas sets the number of resources, and returns true after setting successfully    
        if (compareAndSetState(current, next))
            return true;
    }
}
private void doReleaseShared() {
   /*
    * Ensure that a release propagates, even if there are other
    * in-progress acquires/releases.  This proceeds in the usual
    * way of trying to unparkSuccessor of head if it needs
    * signal. But if it does not, status is set to PROPAGATE to
    * ensure that upon release, propagation continues.
    * Additionally, we must loop in case a new node is added
    * while we are doing this. Also, unlike other uses of
    * unparkSuccessor, we need to know if CAS to reset status
    * fails, if so rechecking.
    */
   for (;;) {
       Node h = head;
        // If the head node is not null and the queue is not empty (head != tail)
        if (h != null && h != tail) {
            // Get the waiting state of the head node
            int ws = h.waitStatus;
            // If it is -1 (SIGNAL), a successor thread needs to be woken up
           if (ws == Node.SIGNAL) {
           		// Set the node waiting status cas to 0
               if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                   continue;            // loop to recheck cases
               // Release blocked thread after successfully setting to 0   
               unparkSuccessor(h);
           }
           // If the wait state is 0 and cas sets the wait state to PROPAGATE, continue to execute the loop after failure
           else if (ws == 0 &&
                    !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
               continue;                // loop on failed CAS
       }
       if (h == head)                   // loop if head changed
           break;
   }
}

2, Introduction to CountDownLatch

CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.

CountDownLatch is initialized with the given count value. The await method will block until the current count value reaches 0. After the count is 0, all waiting threads will be released, and subsequent calls to the await method will return immediately. This is a one-time phenomenon - count will not be reset. If you need a version to reset count, consider using CyclicBarrier.

2.1 Use of CountDownLatch

constructor
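
// count is the number of times countDown() must be invoked before waiting threads are released (see 2.4 for the full source)
public CountDownLatch(int count)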

Common methods

// The thread calling the await() method is suspended and waits until the count value is 0
public void await() throws InterruptedException { };  
// Similar to await(), if the count value does not change to 0 after waiting for timeout, do not wait and continue to execute
public boolean await(long timeout, TimeUnit unit) throws InterruptedException { };  
// Decrements the count by 1; when it reaches 0, waiting threads are released
public void countDown() {
	sync.releaseShared(1);
}
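
A hedged usage sketch of the timed variant (the class name TimedAwaitDemo is assumed; since no worker threads are started here, the call simply times out):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimedAwaitDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);

        // ... start worker threads that call latch.countDown() ...

        // Wait at most 2 seconds; returns false if the count did not reach 0 in time
        if (latch.await(2, TimeUnit.SECONDS)) {
            System.out.println("all workers finished");
        } else {
            System.out.println("timed out, continue without waiting further");
        }
    }
}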

2.2 CountDownLatch application scenario

CountDownLatch is generally used as a countdown counter across threads, forcing one or more threads to wait until another group of tasks (whose number is fixed when the CountDownLatch is constructed) has completed.
There are two usage scenarios for CountDownLatch:

  • Scenario 1: make multiple threads wait
  • Scenario 2: let a single thread wait

Scenario 1, make multiple threads wait: simulate a concurrent start, letting the threads begin executing at the same moment.

import java.util.concurrent.CountDownLatch;

public class CountDownLatchTest {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(1);
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    // Ready... the athletes block here, waiting for the start order
                    countDownLatch.await();
                    String runner = "[" + Thread.currentThread().getName() + "]";
                    System.out.println(runner + " starts running");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }).start();
        }

        Thread.sleep(2000); // The referee gets ready to give the order
        System.out.println("Start!!!");
        countDownLatch.countDown(); // Starting gun: give the start order
    }
}


Scenario 2, make a single thread wait: aggregate and merge results after multiple threads (tasks) have completed.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadLocalRandom;

public class CountDownLatchTest2 {
    public static void main(String[] args) throws Exception {

        CountDownLatch countDownLatch = new CountDownLatch(5);
        for (int i = 0; i < 5; i++) {
            final int index = i;
            new Thread(() -> {
                try {
                    Thread.sleep(1000 + ThreadLocalRandom.current().nextInt(1000));
                    System.out.println(Thread.currentThread().getName() + " finished task " + index);

                    countDownLatch.countDown();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }).start();
        }

        // The main thread blocks here; when the counter reaches 0 it is woken up and continues
        countDownLatch.await();
        System.out.println("Main thread: all tasks are completed, summarizing the results");
    }
}

2.3 Implementation principle of CountDownLatch

CountDownLatch is built on AbstractQueuedSynchronizer: the count given to the constructor is stored directly in the AQS state. Each countDown() call invokes releaseShared(1), decrementing the state by 1; when the state finally reaches 0, the blocked waiting threads are unparked. This unparking is performed by the last thread to call countDown().

When await() is called, the current thread checks whether state is 0. If it is, it continues immediately; otherwise it enters the waiting state until some thread brings state down to 0, which wakes up the threads waiting in await().
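
As an illustration of this principle, and closely mirroring the JDK code analysed in 2.4, a stripped-down latch built on AbstractQueuedSynchronizer could look like the sketch below (the class name SimpleLatch is invented for this example):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleLatch {
    private static final class Sync extends AbstractQueuedSynchronizer {
        Sync(int count) {
            setState(count);                       // count is stored directly in the AQS state
        }

        @Override
        protected int tryAcquireShared(int acquires) {
            return getState() == 0 ? 1 : -1;       // await() proceeds only when state == 0
        }

        @Override
        protected boolean tryReleaseShared(int releases) {
            for (;;) {
                int c = getState();
                if (c == 0)
                    return false;                  // already open, nothing to release
                int next = c - 1;
                if (compareAndSetState(c, next))
                    return next == 0;              // the last countDown() wakes the waiters
            }
        }
    }

    private final Sync sync;

    public SimpleLatch(int count) {
        this.sync = new Sync(count);
    }

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);        // blocks until state reaches 0
    }

    public void countDown() {
        sync.releaseShared(1);                     // decrements state; unparks waiters at 0
    }
}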

The difference between CountDownLatch and Thread.join

  • CountDownLatch allows one or more threads to wait for other threads to complete their work. It looks a bit like the join() method, but it offers a more flexible API than join().
  • With CountDownLatch you control the counting yourself: countDown() can be called once in each of n threads, or n times within a single thread, to decrement the counter.
  • join() works by repeatedly checking whether the joined thread is still alive; if it is, the calling thread keeps waiting. CountDownLatch is therefore the more flexible of the two (a minimal join() sketch follows this list).
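
For comparison, a minimal sketch of the same "wait for workers to finish" effect using Thread.join() (class name JoinDemo assumed):

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker1 = new Thread(() -> System.out.println("task 1 done"));
        Thread worker2 = new Thread(() -> System.out.println("task 2 done"));
        worker1.start();
        worker2.start();

        // join() keeps the caller waiting while the joined thread is still alive
        worker1.join();
        worker2.join();
        System.out.println("main thread: all tasks finished");
    }
}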

The difference between CountDownLatch and CyclicBarrier

CountDownLatch and CyclicBarrier can realize the waiting between threads, but they have different emphases:

  1. The counter of CountDownLatch can only be used once, while the counter of CyclicBarrier can be reset with the reset() method. CyclicBarrier can therefore handle more complex business scenarios; for example, if a calculation goes wrong, the counter can be reset and the threads can run it again.
  2. CyclicBarrier also provides methods such as getNumberWaiting() (the number of threads currently blocked at the barrier) and isBroken() (whether the barrier has been broken, e.g. because a waiting thread was interrupted or timed out).
  3. CountDownLatch.await() is typically used to block the main thread, while CyclicBarrier.await() blocks the participating worker threads rather than the main thread.
  4. Both can make threads wait for one another, but the emphasis differs: CountDownLatch is generally used when one or more threads wait for another group of threads to finish their tasks; CyclicBarrier is generally used when a group of threads wait for each other to reach a common state and then proceed together.
  5. CyclicBarrier can also take a barrierAction, which is handy for merging the results computed by the threads.
  6. CyclicBarrier blocks and wakes the group of threads using ReentrantLock's exclusive lock together with a Condition, whereas CountDownLatch is implemented with the AQS shared lock.

2.4 source code analysis

Construction method: CountDownLatch(int count)

Set the value of state in Sync to count.

public CountDownLatch(int count) {
    if (count < 0) throw new IllegalArgumentException("count < 0");
    this.sync = new Sync(count);
}

Sync(int count) {
    setState(count);
}

await()

public void await() throws InterruptedException {
    sync.acquireSharedInterruptibly(1);
}
public final void acquireSharedInterruptibly(int arg)
        throws InterruptedException {
    // If the thread is interrupted    
    if (Thread.interrupted())
        throw new InterruptedException();
    // Try to obtain the shared lock: judge whether state is set to 0
    // count = 0 return 1, else return -1    
    if (tryAcquireShared(arg) < 0)
    	// Block current main thread
        doAcquireSharedInterruptibly(arg);
    // If count = 0, the main thread continues to execute downward    
}
// Judge whether state is set to 0
protected int tryAcquireShared(int acquires) {
    return (getState() == 0) ? 1 : -1;
}
private void doAcquireSharedInterruptibly(int arg)
    throws InterruptedException {
    // Create a new node
    final Node node = addWaiter(Node.SHARED);
    boolean failed = true;
    try {
        for (;;) {
            // Get the predecessor node
            final Node p = node.predecessor();
            // If the predecessor is the head node, the current node is the first waiting node,
            // so it tries to acquire the lock directly instead of parking
            if (p == head) {
            	// When trying to obtain lock resources, this is actually to judge whether count is equal to 0
                int r = tryAcquireShared(arg);
                // r >= 0 means count == 0, so there is no need to block
                if (r >= 0) {
                	// Set the current node as the head node
                    setHeadAndPropagate(node, r);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
            }
            // If the predecessor is not the head node (or the acquire failed), check whether the parking condition is met (the predecessor's waitStatus must be -1; if not, set it to -1)
            if (shouldParkAfterFailedAcquire(p, node) &&
            	// If the current node meets the blocking condition, the thread is blocked and an interrupt flag is returned
                parkAndCheckInterrupt())
                throw new InterruptedException();
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

After await() is called, tryAcquireShared is invoked to determine whether count equals 0. If it does, nothing further happens and the calling thread continues.
If count > 0, the calling thread has to be blocked. Inside doAcquireSharedInterruptibly a loop first retries the acquisition, i.e. checks again whether count has reached 0; if it has, the loop exits without blocking, otherwise the thread is parked.

countDown()

public void countDown() {
    sync.releaseShared(1);
}
public final boolean releaseShared(int arg) {
    // Try to release the shared lock: subtract arg (1) from the count value
    if (tryReleaseShared(arg)) {
        // tryReleaseShared returns true only when the count reaches 0; then wake up the blocked threads
        doReleaseShared();
        return true;
    }
    return false;
}
protected boolean tryReleaseShared(int releases) {
    // Decrement count; signal when transition to zero
    for (;;) {
    	// Get count
        int c = getState();
        // If it is already 0, return false: the latch is open and there is nothing left to release
        if (c == 0)
            return false;
        // count--    
        int nextc = c-1;
        // cas sets the value of count
        if (compareAndSetState(c, nextc))
        	// After setting successfully, return whether the remaining count is equal to 0
            return nextc == 0;
    }
}
private void doReleaseShared() {
    /*
     * Ensure that a release propagates, even if there are other
     * in-progress acquires/releases.  This proceeds in the usual
     * way of trying to unparkSuccessor of head if it needs
     * signal. But if it does not, status is set to PROPAGATE to
     * ensure that upon release, propagation continues.
     * Additionally, we must loop in case a new node is added
     * while we are doing this. Also, unlike other uses of
     * unparkSuccessor, we need to know if CAS to reset status
     * fails, if so rechecking.
     */
    for (;;) {
        Node h = head;
        // The queue has been initialized and has nodes
        if (h != null && h != tail) {
        	// Get the waiting state of the header node
            int ws = h.waitStatus;
            // If it is -1 (SIGNAL), the successor thread needs to be woken up
            if (ws == Node.SIGNAL) {
            	// First, try cas to change the waiting state of the header node from - 1 to 0. After failure, continue to cycle until the modification is successful
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;            // loop to recheck cases
                // After the waiting state of the head node is changed to 0, wake up the thread    
                unparkSuccessor(h);
            }
            // If the head node's waiting state is 0, CAS it to -3 (PROPAGATE)
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;                // loop on failed CAS
        }
        if (h == head)                   // loop if head changed
            break;
    }
}
// Wake up the thread, node = header
private void unparkSuccessor(Node node) {
  /*
    * If status is negative (i.e., possibly needing signal) try
    * to clear in anticipation of signalling.  It is OK if this
    * fails or if status is changed by waiting thread.
    */
   // Get the waiting state of the header node 
   int ws = node.waitStatus;
   // cas sets it to 0 because the first valid node needs to be awakened
   if (ws < 0)
       compareAndSetWaitStatus(node, ws, 0);

   /*
    * Thread to unpark is held in successor, which is normally
    * just the next node.  But if cancelled or apparently null,
    * traverse backwards from tail to find the actual
    * non-cancelled successor.
    */
    // Get the first valid node
   Node s = node.next;
   // If the successor is null, or its waiting status is > 0 (1 means CANCELLED), the CANCELLED nodes must be skipped
   if (s == null || s.waitStatus > 0) {
       s = null;
       // Traverse from the tail node to find the node that has not been cancelled
       for (Node t = tail; t != null && t != node; t = t.prev)
           if (t.waitStatus <= 0)
               s = t;
   }
   if (s != null)
   		// Wake up thread	
       LockSupport.unpark(s.thread);
}

In short, countDown() uses CAS to decrement state, and when state reaches 0 the threads parked in the queue are woken up and continue execution.

3, Introduction to CyclicBarrier

The name literally means "cyclic barrier": a group of threads wait until they all reach a certain state (the barrier point) and then proceed together. It is called cyclic because the CyclicBarrier can be reused after all waiting threads have been released.

3.1 Use of CyclicBarrier

Construction method

// parties is the number of threads the barrier intercepts. Each thread calls await() to tell the CyclicBarrier that it has reached the barrier, and the calling thread is then blocked.
public CyclicBarrier(int parties)
// barrierAction is run once, by the last thread to reach the barrier, before the waiting threads are released; useful for more complex business scenarios.
public CyclicBarrier(int parties, Runnable barrierAction) 

Important methods

// When the specified number of threads have all called await(), the threads are released
// BrokenBarrierException indicates that the barrier has been broken, e.g. because one of the threads was interrupted or timed out while waiting in await()
public int await() throws InterruptedException, BrokenBarrierException
public int await(long timeout, TimeUnit unit) throws InterruptedException, BrokenBarrierException, TimeoutException

// The barrier can be reset to its initial state through the reset() method
public void reset() 
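
A small sketch (class name BarrierResetDemo assumed) showing how a timed await() breaks the barrier and how reset() makes it usable again:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BarrierResetDemo {
    public static void main(String[] args) throws InterruptedException {
        CyclicBarrier barrier = new CyclicBarrier(2);

        try {
            // Only one thread arrives, so the timed await times out and breaks the barrier
            barrier.await(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("timed out, barrier broken: " + barrier.isBroken());
        } catch (BrokenBarrierException e) {
            System.out.println("barrier was broken by another thread");
        }

        barrier.reset();   // make the barrier usable again for the next round
        System.out.println("after reset, broken: " + barrier.isBroken());
    }
}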

3.2 Application scenarios of CyclicBarrier

CyclicBarrier can be used in the scenario of multi-threaded calculation of data and finally merging the calculation results.

import java.util.Set;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CyclicBarrierTest2 {

    // Save each student's score
    private ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

    private ExecutorService threadPool = Executors.newFixedThreadPool(3);

    private CyclicBarrier cb = new CyclicBarrier(3, () -> {
        int result = 0;
        Set<String> set = map.keySet();
        for (String s : set) {
            result += map.get(s);
        }
        System.out.println("The average score of the three students is: " + (result / 3) + " points");
    });


    public void count() {
        for (int i = 0; i < 3; i++) {
            threadPool.execute(new Runnable() {

                @Override
                public void run() {
                    // Generate the student's score
                    int score = (int) (Math.random() * 40 + 60);
                    map.put(Thread.currentThread().getName(), score);
                    System.out.println(Thread.currentThread().getName()
                            + " score is: " + score);
                    try {
                        // After recording the score, call await() and wait until all scores are in
                        cb.await();
                    } catch (InterruptedException | BrokenBarrierException e) {
                        e.printStackTrace();
                    }
                }

            });
        }
    }

    public static void main(String[] args) {
        CyclicBarrierTest2 cb = new CyclicBarrierTest2();
        cb.count();
    }
}


In the example above, the first two threads block at await() until the third thread also calls await(); at that point the barrierAction Runnable runs and computes the average score of the three.

The counter of CyclicBarrier resets itself after each trip, so the barrier can be reused. This supports "depart as soon as the bus is full" style scenarios: a new batch starts whenever enough participants have arrived.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CyclicBarrierTest3 {

    public static void main(String[] args) {

        AtomicInteger counter = new AtomicInteger();
        ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
                5, 5, 1000, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),
                r -> new Thread(r, "contestant-" + counter.addAndGet(1)),
                new ThreadPoolExecutor.AbortPolicy());

        CyclicBarrier cyclicBarrier = new CyclicBarrier(5,
                () -> System.out.println("Referee: the race begins~~"));

        for (int i = 0; i < 10; i++) {
            threadPoolExecutor.submit(new Runner(cyclicBarrier));
        }

    }

    static class Runner implements Runnable {
        private CyclicBarrier cyclicBarrier;

        public Runner(CyclicBarrier cyclicBarrier) {
            this.cyclicBarrier = cyclicBarrier;
        }

        @Override
        public void run() {
            try {
                int sleepMills = ThreadLocalRandom.current().nextInt(1000);
                Thread.sleep(sleepMills);
                System.out.println(Thread.currentThread().getName() + " is in position, preparation took: "
                        + sleepMills + "ms, waiting at barrier: " + cyclicBarrier.getNumberWaiting());
                cyclicBarrier.await();

            } catch (InterruptedException | BrokenBarrierException e) {
                e.printStackTrace();
            }
        }
    }
}

3.3 Difference between CyclicBarrier and CountDownLatch

  • The counter of CountDownLatch can only be used once, while the counter of CyclicBarrier can be reset with the reset() method. CyclicBarrier can therefore handle more complex business scenarios; for example, if a calculation goes wrong, the counter can be reset and the threads can run it again.
  • CyclicBarrier also provides methods such as getNumberWaiting() (the number of threads currently blocked at the barrier) and isBroken() (whether the barrier has been broken, e.g. because a waiting thread was interrupted or timed out).
  • CountDownLatch.await() is typically used to block the main thread, while CyclicBarrier.await() blocks the participating worker threads rather than the main thread.
  • Both can make threads wait for one another, but the emphasis differs: CountDownLatch is generally used when one or more threads wait for another group of threads to finish their tasks; CyclicBarrier is generally used when a group of threads wait for each other to reach a common state and then proceed together.
  • CyclicBarrier can also take a barrierAction, which is handy for merging the results computed by the threads.
  • CyclicBarrier blocks and wakes the group of threads using ReentrantLock's exclusive lock together with a Condition, whereas CountDownLatch is implemented with the AQS shared lock.

3.4 Analysis of CyclicBarrier source code

Focus:
1. How a group of threads wait for one another before the barrier trips, and how the last thread to reach the barrier wakes the others
2. How the queued waiting nodes are dequeued and recycled
3. How nodes are moved from the condition queue to the synchronization queue

Constructor

// The command to run when tripped 
private final Runnable barrierCommand;

public CyclicBarrier(int parties, Runnable barrierAction) {
    if (parties <= 0) throw new IllegalArgumentException();
    // Keep a copy of the participant count in parties, because count must be restored after a reset
    this.parties = parties;
    // Set count
    this.count = parties;
    // Command to execute after triggering
    this.barrierCommand = barrierAction;
}

await()

public int await() throws InterruptedException, BrokenBarrierException {
    try {
        return dowait(false, 0L);
    } catch (TimeoutException toe) {
        throw new Error(toe); // cannot happen
    }
}
private int dowait(boolean timed, long nanos)
    throws InterruptedException, BrokenBarrierException,
           TimeoutException {
    // reentrantLock is used here       
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        // generation represents the current round; a new Generation is created when the barrier trips or is reset
        final Generation g = generation;

        if (g.broken)
            throw new BrokenBarrierException();
		// If the thread is interrupted, an exception is thrown
        if (Thread.interrupted()) {
            breakBarrier();
            throw new InterruptedException();
        }
		
		// Reduce count
        int index = --count;
        // If it is 0 after calculation, the action is triggered
        if (index == 0) {  // tripped
            boolean ranAction = false;
            try {
                final Runnable command = barrierCommand;
                if (command != null)
                    command.run();
                ranAction = true;
                // Start the next generation and wake up all waiting threads; called only while holding the lock
                nextGeneration();
                return 0;
            } finally {
                if (!ranAction)
                    breakBarrier();
            }
        }

        // loop until tripped, broken, interrupted, or timed out
        for (;;) {
            try {
                // If no timeout was specified, wait on the condition
                if (!timed)
                    trip.await();
                else if (nanos > 0L)
                    nanos = trip.awaitNanos(nanos);
            } catch (InterruptedException ie) {
                if (g == generation && ! g.broken) {
                    breakBarrier();
                    throw ie;
                } else {
                    // We're about to finish waiting even if we had not
                    // been interrupted, so this interrupt is deemed to
                    // "belong" to subsequent execution.
                    Thread.currentThread().interrupt();
                }
            }

            if (g.broken)
                throw new BrokenBarrierException();

            if (g != generation)
                return index;

            if (timed && nanos <= 0L) {
                breakBarrier();
                throw new TimeoutException();
            }
        }
    } finally {
        lock.unlock();
    }
}
private final ReentrantLock lock = new ReentrantLock();
private final Condition trip = lock.newCondition();

// Start the next generation: the current round ends and a new round begins
private void nextGeneration() {
    // signal completion of last generation
    // Wake up all threads waiting on the ReentrantLock condition queue (trip)
    trip.signalAll();
    // set up next generation
    // Restore count to the initial parties value
    count = parties;
    generation = new Generation();
}
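
For reference, the cyclic (reusable) behaviour also shows up in reset(), which in the same JDK source simply breaks the current generation and starts a new one while holding the lock:

public void reset() {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        breakBarrier();   // break the current generation
        nextGeneration(); // start a new generation
    } finally {
        lock.unlock();
    }
}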
