Kotlin: how to achieve multi-threaded synchronization?

 

Problem background
Two tasks are to run on multiple threads: task1 and task2 execute in parallel; once both have finished, task3 runs with their results.

// Each task simulates time-consuming work with sleep
val task1: () -> String = {
    sleep(2000)
    "Hello".also { println("task1 finished: $it") }
}

val task2: () -> String = {
    sleep(2000)
    "World".also { println("task2 finished: $it") }
}

val task3: (String, String) -> String = { p1, p2 ->
    sleep(2000)
    "$p1 $p2".also { println("task3 finished: $it") }
}

Implementation approaches
Kotlin can synchronize multiple threads in several ways, most of them provided by Java's own concurrency toolkit:

Method 1: Thread.join
Method 2: thread locks: synchronized, ReentrantLock, CountDownLatch, CyclicBarrier
Method 3: CAS
Method 4: Future / CompletableFuture
Method 5: RxJava
Method 6: Coroutine and Flow
Method 1: Thread.join()
This is the simplest way to synchronize threads.

@Test
fun test_join() {
    lateinit var s1: String
    lateinit var s2: String

    val t1 = Thread { s1 = task1() }
    val t2 = Thread { s2 = task2() }
    t1.start()
    t2.start()

    // join() blocks the calling thread until the worker has finished
    t1.join()
    t2.join()

    task3(s1, s2)
}

Method 2: thread locks
These mainly include synchronized, ReentrantLock, CountDownLatch and CyclicBarrier.

synchronized

@Test
fun test_synchronized() {
    lateinit var s1: String
    lateinit var s2: String

    Thread {
        synchronized(Unit) {
            s1 = task1()
        }
    }.start()
    s2 = task2()

    // relies on the worker thread entering its synchronized block before the
    // main thread reaches this one, so that task3 runs only after task1 is done
    synchronized(Unit) {
        task3(s1, s2)
    }
}

Note: to wait on N parallel tasks this way, N lock objects have to be declared, and the waiting code ends up with N nested synchronized blocks, as the sketch below illustrates.
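A minimal sketch of what that nesting looks like for the two tasks above (the lock objects lock1 and lock2, and the short sleep, are illustrative additions, not part of the original example):

fun test_synchronized_nested() {
    lateinit var s1: String
    lateinit var s2: String
    // one lock object per parallel task
    val lock1 = Any()
    val lock2 = Any()

    Thread { synchronized(lock1) { s1 = task1() } }.start()
    Thread { synchronized(lock2) { s2 = task2() } }.start()

    // give both workers a moment to enter their synchronized blocks first
    // (the same implicit assumption the example above relies on)
    sleep(100)

    // waiting for N tasks means N nested synchronized blocks
    synchronized(lock1) {
        synchronized(lock2) {
            task3(s1, s2)
        }
    }
}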

ReentrantLock
Compared with synchronized, ReentrantLock avoids the nesting problem, but a separate lock still has to be created for each parallel task.

fun test_ReentrantLock() {

    lateinit var s1: String
    lateinit var s2: String

    val lock = ReentrantLock()
    Thread {
        lock.lock()
        s1 = task1()
        lock.unlock()
    }.start()
    s2 = task2()

    // as above, this assumes the worker thread acquires the lock before the main thread gets here
    lock.lock()
    task3(s1, s2)
    lock.unlock()

}

Note that BlockingQueue implementations are built on ReentrantLock internally, so they can also be used for thread synchronization; their natural application, however, is producer/consumer synchronization (see the sketch after the example below).

fun test_blockingQueue() {

    lateinit var s1: String
    lateinit var s2: String

    val queue = SynchronousQueue<Unit>()

    Thread {
        s1 = task1()
        queue.put(Unit)
    }.start()

    s2 = task2()

    queue.take() // blocks until the worker thread has put its signal
    task3(s1, s2)
}
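For the producer/consumer scenario mentioned above, a bounded queue such as ArrayBlockingQueue is the more typical fit. A minimal sketch, with arbitrary capacity and item counts:

fun test_producerConsumer() {
    // put() blocks when the queue is full, take() blocks when it is empty
    val queue = ArrayBlockingQueue<String>(2)

    Thread {
        repeat(3) { i ->
            queue.put("item-$i")
            println("produced item-$i")
        }
    }.start()

    repeat(3) {
        val item = queue.take()
        println("consumed $item")
    }
}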

CountDownLatch
Most locks in java.util.concurrent are built on AQS and fall into exclusive and shared locks. ReentrantLock is an exclusive lock; a shared lock fits this scenario better, since no separate lock has to be created per task.

@Test
fun test_countdownlatch() {
    lateinit var s1: String
    lateinit var s2: String
    val cd = CountDownLatch(2)

    Thread {
        s1 = task1()
        cd.countDown()
    }.start()

    Thread {
        s2 = task2()
        cd.countDown()
    }.start()

    cd.await() // blocks until the count reaches zero
    task3(s1, s2)
}

CyclicBarrier
Principle: a group of threads each wait at a synchronization point and only continue once all of them have arrived; whichever threads reach the barrier first are blocked until the rest catch up.

@Test
fun test_CyclicBarrier() {
    lateinit var s1: String
    lateinit var s2: String
    val cb = CyclicBarrier(3)

    Thread {
        s1 = task1()
        cb.await()
    }.start()

    Thread {
        s2 = task2()
        cb.await()
    }.start()

    cb.await() // the test thread itself is the third party
    task3(s1, s2)
}

Note the difference between CountDownLatch and CyclicBarrier: a CountDownLatch is one-shot, while a CyclicBarrier resets after each generation and can be reused, as the sketch below shows.
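A minimal sketch of that reuse: once all parties have called await(), the barrier starts a new generation, so the same instance can gate several rounds (the two rounds and the log messages here are illustrative):

fun test_CyclicBarrier_reuse() {
    // the optional barrier action runs each time all parties arrive
    val barrier = CyclicBarrier(2) { println("round finished") }

    Thread {
        repeat(2) { round ->
            println("worker reached round $round")
            barrier.await() // the barrier resets automatically after both parties arrive
        }
    }.start()

    repeat(2) { round ->
        println("main reached round $round")
        barrier.await()
    }
}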

Method 3: CAS
Principle: counting with a CAS-based atomic class.
Application scenario: synchronizing short, CPU-bound tasks, since the spin-wait consumes CPU while it waits.

fun test_cas() {

    lateinit var s1: String
    lateinit var s2: String

    val cas = AtomicInteger(2)

    Thread {
        s1 = task1()
        cas.getAndDecrement()
    }.start()

    Thread {
        s2 = task2()
        cas.getAndDecrement()
    }.start()

    while (cas.get() != 0) { } // spin until both tasks have counted down
    task3(s1, s2)
}

A note here: when people see this lock-free CAS approach, many think of volatile instead. That is not thread-safe: volatile guarantees visibility but not atomicity, so cnt-- is a read-modify-write that would need a lock (or an atomic class) to be correct, as the sketch after the example demonstrates.

// assumes a shared counter declared at top level, e.g. @Volatile var cnt = 2
fun test_Volatile() {
    lateinit var s1: String
    lateinit var s2: String

    Thread {
        s1 = task1()
        cnt--
    }.start()

    Thread {
        s2 = task2()
        cnt--
    }.start()

    while (cnt != 0) {
    }
    task3(s1, s2)
}
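That claim can be demonstrated directly. In the sketch below (thread and iteration counts are arbitrary), the volatile counter usually ends up above zero because concurrent decrements get lost, while the CAS-based AtomicInteger reliably reaches zero:

@Volatile
var volatileCounter = 100_000

fun test_volatile_lost_updates() {
    val atomicCounter = AtomicInteger(100_000)

    val threads = List(10) {
        Thread {
            repeat(10_000) {
                volatileCounter--               // read-modify-write: not atomic, updates can be lost
                atomicCounter.decrementAndGet() // atomic decrement
            }
        }
    }
    threads.forEach { it.start() }
    threads.forEach { it.join() }

    println("volatile counter: $volatileCounter (often > 0)")
    println("atomic counter:   ${atomicCounter.get()} (always 0)")
}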

Method 4: Future
Since Java 1.5 there has been a synchronization mechanism that returns a result when a task finishes: Callable and Future. With it, no extra variables are needed to hold the results.

// future.get() waits synchronously for the result, which keeps the code straightforward
fun test_future() {

    val future1 = FutureTask(Callable(task1))
    val future2 = FutureTask(Callable(task2))

    val pool = Executors.newCachedThreadPool()
    pool.execute(future1)
    pool.execute(future2)

    task3(future1.get(), future2.get())
}

Note that although future.get() is convenient, it blocks the calling thread. Java 8 therefore introduced CompletableFuture: it implements both the Future and CompletionStage interfaces, so multiple stages can be composed logically for complex asynchronous flows, and its callback style avoids blocking the thread.

fun test_CompletableFuture() {
    CompletableFuture.supplyAsync(task1)
        .thenCombine(CompletableFuture.supplyAsync(task2)) { p1, p2 ->
            task3(p1, p2)
        }.join()
}
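thenCombine merges exactly two stages; when there are more than two parallel tasks, CompletableFuture.allOf is the usual tool. A minimal sketch with the same two tasks:

fun test_CompletableFuture_allOf() {
    val f1 = CompletableFuture.supplyAsync { task1() }
    val f2 = CompletableFuture.supplyAsync { task2() }

    // allOf completes once every listed future has completed; join() on the
    // individual futures no longer blocks at that point
    CompletableFuture.allOf(f1, f2)
        .thenRun { task3(f1.join(), f2.join()) }
        .join()
}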

Method 5: RxJava
RxJava provides operators that handle thread synchronization:

1. subscribeOn starts each task asynchronously
2. zip combines the results of two Observables

fun test_Rxjava() {

    Observable.zip(
        Observable.fromCallable(Callable(task1))
            .subscribeOn(Schedulers.newThread()),
        Observable.fromCallable(Callable(task2))
            .subscribeOn(Schedulers.newThread()),
        BiFunction(task3)
    ).test().awaitTerminalEvent() // blocks until the zipped stream terminates
}

Method 6: Coroutine and Flow
Coroutines are Kotlin's own synchronization mechanism (all of the previous approaches are really Java's thread synchronization facilities).

fun test_coroutine() {

    runBlocking {
        // async starts each task concurrently; await() suspends until the result is ready
        val c1 = async(Dispatchers.IO) {
            task1()
        }

        val c2 = async(Dispatchers.IO) {
            task2()
        }

        task3(c1.await(), c2.await())
    }
}
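When the number of parallel tasks is not fixed, awaitAll from kotlinx.coroutines expresses the same idea more compactly. A minimal sketch with the same two tasks:

fun test_coroutine_awaitAll() {
    runBlocking {
        // start both tasks concurrently, then suspend until all of them are done
        val results = listOf(
            async(Dispatchers.IO) { task1() },
            async(Dispatchers.IO) { task2() }
        ).awaitAll()

        task3(results[0], results[1])
    }
}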

In particular, Flow, Kotlin's coroutine-based counterpart to RxJava, offers similar operators such as zip:

fun test_flow() {

    val flow1 = flow<String> { emit(task1()) }
    val flow2 = flow<String> { emit(task2()) }

    runBlocking {
        flow1.zip(flow2) { t1, t2 ->
            task3(t1, t2)
        }.flowOn(Dispatchers.IO) // flowOn lets the upstream tasks compute and emit asynchronously
            .collect()
    }
}

Finally: this article will continue to be updated; if you find it useful, please follow.