On the four thread pools


First, let's take a look at the code for obtaining four thread pools:

    ExecutorService fixedThreadPool = Executors.newFixedThreadPool(10);
    ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
    ExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(10);
    ExecutorService singleThreadPool = Executors.newSingleThreadExecutor();

All four thread pools are created through factory methods of the Executors class. Stepping into each of the four methods in turn, we find that they all ultimately call the same ThreadPoolExecutor constructor; the only difference is the arguments they pass. Let's look at the parameter list of ThreadPoolExecutor:

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {

These different arguments are what give the four thread pools their different working mechanisms. Based on the parameter comments in the source code, their meanings are as follows:

  • corePoolSize: the number of core threads. Core threads stay resident in the pool and are not destroyed even when idle, unless allowCoreThreadTimeOut is set to true.

  • maximumPoolSize: the maximum number of threads in the thread pool

  • keepAliveTime: the maximum idle time for threads beyond the core count (non-core threads) before they are destroyed. (For example, if keepAliveTime is 5s, the core count is 2, and there are currently 4 threads, the two threads above the core count are destroyed after being idle for 5 seconds.)

  • unit: the time unit of keepAliveTime

  • workQueue: wait queue

  • threadFactory: the factory used to create new threads

  • handler: the policy for handling new tasks when the waiting queue is full and the number of threads has reached the maximum.

    • AbortPolicy (default): throw a RejectedExecutionException directly

    • CallerRunsPolicy: run the task on the caller's thread. (If the caller is the main thread, the task is executed by main.)

    • DiscardOldestPolicy: discard the oldest unprocessed task, then try to execute the current task. (The oldest unprocessed task is the head node of the queue; the source code removes it by calling poll().)

    • DiscardPolicy: discard the task silently, without throwing an exception.
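As a quick illustration of the default AbortPolicy, here is a small sketch (the class and method names are my own, not from the JDK): a pool with one thread, a maximum of one thread, and a queue of capacity one is filled with blocking tasks until a submission is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {

    // 1 core thread, 1 maximum thread, and a queue of capacity 1:
    // the third blocked task must be rejected by AbortPolicy.
    static boolean submitUntilRejected() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        try {
            pool.execute(blocker); // occupies the only thread
            pool.execute(blocker); // fills the queue
            pool.execute(blocker); // queue full, max threads reached
        } catch (RejectedExecutionException e) {
            rejected = true;       // AbortPolicy threw
        } finally {
            release.countDown();
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(submitUntilRejected());
    }
}
```

Swapping in CallerRunsPolicy or DiscardPolicy instead would make the third execute() succeed without an exception, matching the descriptions above.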

Working mechanism of the thread pool:

As tasks keep arriving: while the current number of threads is less than the core count, a new thread is created for each task. Once the core count is reached, new tasks are placed in the waiting queue. When the waiting queue is full, additional (non-core) threads are created. When the number of threads has reached the maximum and the waiting queue is full, the rejection policy is applied.
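The fill order described above (core threads first, then the queue, then non-core threads) can be observed directly. The following is a hypothetical sketch; the pool parameters and the GrowthDemo name are mine:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowthDemo {

    // Fill a pool with 2 core threads, a queue of capacity 2, and a
    // maximum of 4 threads, using tasks that block until released.
    static int[] observe() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 5L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        for (int i = 0; i < 6; i++) {
            pool.execute(blocker); // 2 fill core threads, 2 queue up, 2 force extra threads
        }
        int poolSize = pool.getPoolSize();   // maximum reached: 4
        int queued = pool.getQueue().size(); // queue full: 2
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[] { poolSize, queued };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = observe();
        System.out.println(r[0] + " threads, " + r[1] + " queued");
    }
}
```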

Next, we will analyze different thread pools according to parameters:

FixedThreadPool

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

We can see that corePoolSize and maximumPoolSize are the same, and keepAliveTime is 0. workQueue is a LinkedBlockingQueue, a linked-list-based blocking queue that is unbounded by default. So this is a pool with a fixed number of threads and an unbounded waiting queue. We can infer that it suits scenarios with a steady stream of tasks, for example an average of 10 tasks per second with no sharp spikes in the arrival curve.

Suitable scenario: a small number of large tasks. (Large tasks are slow to process; with a large number of threads, time would be wasted on context switching, so the thread count is capped at a fixed number.)
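A small sketch of the "fixed" property (class and method names are illustrative): no matter how many tasks are submitted, the pool never grows past nThreads, and the surplus waits in the queue.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class FixedDemo {

    // Submit 10 tasks to a pool fixed at 3 threads and record the
    // largest thread count the pool ever reached.
    static int maxObservedThreads() throws InterruptedException {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(3);
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            pool.execute(done::countDown);
        }
        done.await();
        int largest = pool.getLargestPoolSize(); // stays at 3 despite 10 tasks
        pool.shutdown();
        return largest;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(maxObservedThreads());
    }
}
```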

CachedThreadPool

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

We can see that corePoolSize is 0, meaning the pool has no core threads: every thread can be recycled, and at times the pool is empty. maximumPoolSize is Integer.MAX_VALUE, so the pool can create threads almost without limit. keepAliveTime is 60 seconds, so an idle thread is destroyed after 60 seconds. workQueue is a SynchronousQueue, a queue with no capacity: a submitted task must be handed directly to a waiting thread, or a new thread is created to consume it. We can infer that this pool suits workloads that are usually light but occasionally spike.

Suitable scenario: a large number of small tasks. (Each task finishes quickly, so threads rarely need to be switched away in the middle of a task.)
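The "no core threads" point can be seen directly: a freshly created cached pool holds zero threads, and one is created on demand for the first task. This is an illustrative sketch (the CachedDemo name and the 21 * 2 task are mine):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedDemo {

    // A cached pool starts with zero threads (no core threads) and
    // creates one on demand for the first submitted task.
    static int[] demo() throws Exception {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newCachedThreadPool();
        int threadsBefore = pool.getPoolSize(); // 0: nothing resident
        Future<Integer> f = pool.submit(() -> 21 * 2);
        int result = f.get();                   // computed on a newly created thread
        pool.shutdown();
        return new int[] { threadsBefore, result };
    }

    public static void main(String[] args) throws Exception {
        int[] r = demo();
        System.out.println(r[0] + " threads before, result " + r[1]);
    }
}
```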

ScheduledThreadPool

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

We can see that the biggest difference here is that workQueue is a DelayedWorkQueue: a heap ordered by remaining delay from smallest to largest, which hands out the head node only once its delay has expired. This lets the pool run tasks at a specified time. Periodic tasks are implemented by re-adding the task with its delay advanced after each run.

Suitable for scenarios: scheduled tasks or periodic tasks.
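A minimal sketch of delayed execution (the ScheduledDemo name and the 200 ms delay are my own choices): the scheduled task records when it actually ran, and that time is never earlier than the requested delay.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {

    // Schedule a task with a 200 ms delay and check that it did not
    // run before the delay expired.
    static boolean delayRespected() throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        long start = System.nanoTime();
        ScheduledFuture<Long> ranAt =
                pool.schedule(() -> System.nanoTime(), 200, TimeUnit.MILLISECONDS);
        long elapsedMs = (ranAt.get() - start) / 1_000_000;
        pool.shutdown();
        return elapsedMs >= 200;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(delayRespected());
    }
}
```

For periodic work, scheduleAtFixedRate and scheduleWithFixedDelay build on the same queue mechanism.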

SingleThreadExecutor

    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

We can see that both corePoolSize and maximumPoolSize are 1, meaning the pool has exactly one fixed thread. Since there is a single thread, the pool executes tasks serially, with no concurrency. workQueue is a LinkedBlockingQueue, a linked-list blocking queue. The pool is therefore suited to executing the queued tasks one after another.

Suitable scenario: tasks processed serially in sequence.
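The serial guarantee can be demonstrated with a short sketch (the SerialDemo name is mine): tasks submitted to a single-thread executor complete strictly in submission order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialDemo {

    // Tasks submitted to a single-thread executor run one at a time,
    // in submission order.
    static List<Integer> runSerially() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<Integer> order = new ArrayList<>(); // only the one worker thread writes
        for (int i = 0; i < 5; i++) {
            final int n = i;
            pool.execute(() -> order.add(n));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order; // safe to read after termination
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSerially());
    }
}
```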

Readers may wonder: what does a keepAliveTime of 0 mean? Are idle threads recycled immediately, or never?

The comment on the keepAliveTime parameter explicitly says it applies only to non-core threads. We can reason from the ScheduledThreadPool source: if 0 meant "never recycle", then any non-core thread created in a ScheduledThreadPool would never be recycled, which would be unreasonable. So the author concludes that 0 means "recycle immediately once idle".

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

The author's knowledge is limited. If there are any mistakes, please point them out in the comments.

Tags: Programming Database Multithreading thread pool

Posted on Wed, 01 Dec 2021 11:45:38 -0500 by mattonline