## Little's Law

Little's Law comes from queuing theory and is expressed by the following formula:

L = λW

• L: the average number of requests present in the system.

• λ (lambda): the effective arrival rate of requests. For example, 5/s means that five requests arrive at the system per second.

• W: the average time a request spends in the system, waiting plus executing.

Queuing theory: the discipline that studies the random behavior of queuing phenomena in service systems and explores the probabilistic laws behind queuing-related quantitative indicators.

## Scenario

Let's start with a scenario: adjusting the staffing of a fried chicken shop.

#### Premises

• Each customer buys one fried chicken at a time;

• It takes an employee 1 minute to make one fried chicken;

• The shorter a customer waits to buy fried chicken, the better the experience.

If you were the owner of the fried chicken shop and needed to adjust the number of employees because of this year's epidemic, what would you do?

This is essentially a trade-off between employee utilization and customer experience.

1. For customers to have a good experience, you need to maintain or even increase the number of employees;

2. To avoid wasting resources and to control labor costs, idle employees need to be cut.

Suppose there are currently three employees in the store. How do you decide on staffing adjustments? Let's analyze the following scenarios.

When the average customer arrival rate = 3 people per minute, customer wait times are short, the experience is good, and every employee is fully occupied. No adjustment is needed.

When the average arrival rate is less than 3 people per minute, customers still wait only briefly and the experience is good, but there is always one employee sitting idle; at this point you can consider cutting one person.

When the average arrival rate is greater than 3 people per minute, a queue builds up and later customers (the 5th, 6th, 7th, and so on) wait longer, so the experience worsens; staff can be added according to the actual situation.

By Little's Law, the sweet spot is when the average arrival rate per minute equals the number of employees: with λ = 3 customers per minute and W = 1 minute per chicken, L = λW = 3 employees are kept busy on average.
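The staffing conclusion above can be checked with a quick calculation. This is a minimal sketch; the class name is illustrative, and the numbers come from the premises above:

```java
// Little's Law applied to the shop: L = λW
public class LittlesLawShop {
    public static void main(String[] args) {
        double arrivalRatePerMinute = 3.0; // λ: customers arriving per minute
        double serviceTimeMinutes = 1.0;   // W: minutes to make one fried chicken
        double busyEmployees = arrivalRatePerMinute * serviceTimeMinutes; // L
        System.out.println("Employees kept busy on average: " + busyEmployees);
        // With λ = 3/min and W = 1 min, three employees are exactly saturated.
    }
}
```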

Thread pool processing is actually a queuing model. The simplified Java thread pool processing model is as follows:

Approximate phases of thread pool task execution: submit --> enqueue or execute directly --> actual execution
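These phases can be seen with a plain `java.util.concurrent.ThreadPoolExecutor`. A minimal sketch; the pool sizes and the sample task are illustrative:

```java
import java.util.concurrent.*;

public class PoolFlowDemo {
    public static void main(String[] args) throws Exception {
        // 2 core threads, up to 4 threads; tasks queue here when core threads are busy
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
        Future<Integer> result = pool.submit(() -> 21 * 2); // submit phase
        System.out.println(result.get());                   // actual execution result: 42
        pool.shutdown();
    }
}
```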

• Average task queue wait time: total task queue wait time divided by the number of tasks actually executed;

• Average task execution time: total actual task execution time divided by the number of tasks actually executed;

We can evaluate and tune thread pool parameters based on the following metrics:

When the ratio of task wait time to response time is too high, many tasks are queuing; evaluate whether the current thread pool size is reasonable and adjust it according to the system load.

When the average number of tasks in the thread pool is less than the current thread pool size, the number of threads should be reduced appropriately.

When the average number of tasks being processed in the system is greater than the current thread pool size, assess whether the system can support more threads (CPU, memory, etc.) before scaling up.
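As a hedged sketch of the metrics above (the totals below are assumed sample values, not measurements), the wait-to-response ratio can be derived from the kinds of totals a monitored pool collects:

```java
public class PoolMetricsDemo {
    public static void main(String[] args) {
        long tasksExecuted = 1_000;                 // number of tasks actually executed
        long totalQueueWaitNanos = 2_000_000_000L;  // total time tasks waited in the queue
        long totalServiceNanos = 8_000_000_000L;    // total actual execution time

        double avgWait = (double) totalQueueWaitNanos / tasksExecuted;  // avg queue wait per task
        double avgService = (double) totalServiceNanos / tasksExecuted; // avg execution time per task
        double waitRatio = avgWait / (avgWait + avgService);            // wait time / response time

        System.out.printf("wait ratio: %.2f%n", waitRatio);
        // A ratio of 0.20 means 20% of response time is spent queuing; the higher
        // the ratio, the more a larger pool (or faster tasks) is worth evaluating.
    }
}
```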

## Code Snippet

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

import com.google.common.base.Throwables;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class MonitoredThreadPoolExecutor extends ThreadPoolExecutor {

    // Per-thread start time of the task currently executing
    private final ThreadLocal<Long> startTime = new ThreadLocal<>();
    // Submission timestamp of each queued task
    private final ConcurrentHashMap<Runnable, Long> timeOfRequest = new ConcurrentHashMap<>();
    private long lastArrivalTime;
    // Total number of tasks completed
    private final AtomicInteger numberOfRequestsRetired = new AtomicInteger();
    // Total number of task submissions
    private final AtomicInteger numberOfRequests = new AtomicInteger();
    // Total actual task execution time
    private final AtomicLong totalServiceTime = new AtomicLong();
    // Total time tasks spent waiting in the queue
    private final AtomicLong totalPoolTime = new AtomicLong();
    // Total inter-arrival time between task submissions
    private final AtomicLong aggregateInterRequestArrivalTime = new AtomicLong();

    public MonitoredThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
                                       BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory,
                                       RejectedExecutionHandler handler) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
    }

    @Override
    protected void beforeExecute(Thread worker, Runnable task) {
        super.beforeExecute(worker, task);
        // Queue wait time = now - submission time
        totalPoolTime.addAndGet(System.nanoTime() - timeOfRequest.remove(task));
        startTime.set(System.nanoTime());
    }

    @Override
    protected void afterExecute(Runnable task, Throwable t) {
        try {
            long start = startTime.get();
            totalServiceTime.addAndGet(System.nanoTime() - start);
            numberOfRequestsRetired.incrementAndGet();
        } finally {
            if (null != t) {
                log.error(AppSystem.ERROR_LOG_PREFIX + "Thread pool handling exception:", Throwables.getRootCause(t));
            }
            super.afterExecute(task, t);
        }
    }

    @Override
    public void execute(Runnable task) {
        long now = System.nanoTime();
        numberOfRequests.incrementAndGet();
        synchronized (this) {
            if (lastArrivalTime != 0L) {
                // Accumulate time between consecutive submissions
                aggregateInterRequestArrivalTime.addAndGet(now - lastArrivalTime);
            }
            lastArrivalTime = now;
            timeOfRequest.put(task, now);
        }
        super.execute(task);
    }
}
```

#### Test

Two rounds of requests, submitting 10 tasks per round, with 1 thread

Two rounds of requests, submitting 10 tasks per round, with 10 threads

Two rounds of requests, submitting 10 tasks per round, with 50 threads
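A rough sketch of such a test, using a plain fixed-size pool; the 10 ms task duration and class name are assumptions, and real workloads will behave differently:

```java
import java.util.concurrent.*;

public class PoolSizeTest {
    // Submit 10 tasks and return the total wall-clock time until all complete
    static long runRound(int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                try { Thread.sleep(10); } catch (InterruptedException ignored) { } // simulated work
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int threads : new int[] {1, 10, 50}) {
            System.out.printf("threads=%d, total time=%d ms%n", threads, runRound(threads) / 1_000_000);
        }
    }
}
```

With 1 thread the 10 tasks run sequentially (at least ~100 ms total); with 10 threads they run in parallel; 50 threads cannot beat 10 here because only 10 tasks exist, which mirrors the "idle employees" case above.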

The above test is one-sided; in practice, adjustments should be made according to the system's long-term averages.