#1 Basic Concepts
##1.1 concurrency
Two or more threads exist within the same period of time. If the program runs on a single-core processor, the threads are alternately switched in and out; they all "exist" at the same time, each in some state of its execution. If it runs on a multi-core processor, each thread can be assigned to its own core, so the threads can truly run at the same time.
##1.2 high concurrency
High concurrency is one of the factors that must be considered when designing the architecture of a distributed Internet system: the system must be able to handle many requests in parallel at the same time.
##1.3 difference and connection
- Concurrency: multiple threads operate on the same resource; the focus is on thread safety and making reasonable use of that resource
- High concurrency: the service can handle many requests at the same time; the focus is on improving program performance
#2 CPU
##2.1 CPU multi-level cache
- Why do we need a CPU cache?
  The CPU clock is far faster than main memory, so within a processor clock cycle the CPU often has to wait for main memory, wasting resources. The cache exists to alleviate this speed mismatch between CPU and memory (structure: CPU -> cache -> main memory)
- Significance of the CPU cache
  - Temporal locality: if a piece of data is accessed, it is likely to be accessed again in the near future
  - Spatial locality: if a piece of data is accessed, the data adjacent to it is likely to be accessed soon as well
##2.2 cache consistency (MESI)
MESI ensures that shared data stays consistent across the caches of multiple CPUs.
- M (Modified)
  The cache line is cached only in this CPU's cache and has been modified, so it is inconsistent with main memory. It must be written back to main memory at some future point, before any other CPU reads the corresponding memory location; once written back, the cache line's state changes to E
- E (Exclusive)
  The cache line is cached only in this CPU's cache and has not been modified, so it is consistent with main memory. It changes to S whenever the memory is read by another CPU, and to M when it is modified
- S (Shared)
  The cache line may be cached by multiple CPUs and is consistent with main memory
- I (Invalid)
  The cache line is invalid and must be re-read from main memory before use
- Out-of-order execution optimization
  To improve execution speed, the processor may execute code in an order different from the original program order
##Advantages and risks of concurrency
#3 project preparation
##3.1 project initialization
##3.2 concurrency simulation - JMeter load testing
##3.3 concurrency simulation - code
###CountDownLatch
###Semaphore
The two classes above are usually used together with a thread pool.
Let's start with the concurrency simulation:
```java
package com.mmall.concurrency;

import com.mmall.concurrency.annoations.NotThreadSafe;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * @author JavaEdge
 * @date 18/4/1
 */
@Slf4j
@NotThreadSafe
public class ConcurrencyTest {

    /**
     * Total number of requests
     */
    public static int clientTotal = 5000;

    /**
     * Number of threads executing concurrently
     */
    public static int threadTotal = 200;

    public static int count = 0;

    public static void main(String[] args) throws Exception {
        // Define the thread pool
        ExecutorService executorService = Executors.newCachedThreadPool();
        // Define the semaphore with the number of concurrently executing threads allowed
        final Semaphore semaphore = new Semaphore(threadTotal);
        // Used to wait for all requests to finish before reading the count
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        // Submit the requests to the thread pool
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    // Acquire a permit
                    semaphore.acquire();
                    add();
                    // Release the permit
                    semaphore.release();
                } catch (Exception e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        // Shut down the thread pool
        executorService.shutdown();
        log.info("count:{}", count);
    }

    /**
     * Counting method
     */
    private static void add() {
        count++;
    }
}
```
Running it several times produces different results, so this code is not thread safe.
#4 thread safety
##4.1 thread safety
A class is thread safe if, when accessed by multiple threads, it behaves correctly no matter how the runtime environment schedules or interleaves those threads, and without requiring any additional synchronization or coordination in the calling code.
##4.2 atomicity
An atomic operation is completed in one go; no other thread can step in halfway through.
###4.2.1 Atomic package
- AtomicXXX: based on CAS (e.g. Unsafe.compareAndSwapInt)
  Updates are mutually exclusive: only one thread can successfully modify the value at any moment
```java
package com.mmall.concurrency.example.atomic;

import com.mmall.concurrency.annoations.ThreadSafe;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;

/**
 * @author JavaEdge
 */
@Slf4j
@ThreadSafe
public class AtomicExample2 {

    /**
     * Total number of requests
     */
    public static int clientTotal = 5000;

    /**
     * Number of threads executing concurrently
     */
    public static int threadTotal = 200;

    /**
     * Counter shared by all threads (working memory)
     */
    public static AtomicLong count = new AtomicLong(0);

    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(threadTotal);
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (Exception e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        // Main memory
        log.info("count:{}", count.get());
    }

    private static void add() {
        count.incrementAndGet();
        // count.getAndIncrement();
    }
}
```
The next example uses `AtomicReference#compareAndSet`: a CAS only succeeds when the expected value matches the current value, so the final result below is 4.

```java
package com.mmall.concurrency.example.atomic;

import com.mmall.concurrency.annoations.ThreadSafe;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.atomic.AtomicReference;

/**
 * @author JavaEdge
 * @date 18/4/3
 */
@Slf4j
@ThreadSafe
public class AtomicExample4 {

    private static AtomicReference<Integer> count = new AtomicReference<>(0);

    public static void main(String[] args) {
        // succeeds: 0 -> 2
        count.compareAndSet(0, 2);
        // fails: the current value is no longer 0
        count.compareAndSet(0, 1);
        // fails: the current value is not 1
        count.compareAndSet(1, 3);
        // succeeds: 2 -> 4
        count.compareAndSet(2, 4);
        // fails: the current value is not 3
        count.compareAndSet(3, 5);
        log.info("count:{}", count.get());
    }
}
```
- AtomicReference, AtomicReferenceFieldUpdater
- AtomicBoolean
- AtomicStampedReference: addresses the ABA problem of CAS
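As a sketch of how AtomicStampedReference detects the ABA problem (a minimal, hypothetical example, not taken from the project): every compareAndSet must also match a version stamp, so a value that was changed from A to B and back to A is still rejected.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaExample {
    public static void main(String[] args) {
        // initial value 100 with stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        // remember the stamp before the ABA change happens
        int oldStamp = ref.getStamp();

        // another thread changes 100 -> 101 -> 100, bumping the stamp each time
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // a plain value comparison would succeed here, but the stale stamp makes this CAS fail
        boolean success = ref.compareAndSet(100, 102, oldStamp, oldStamp + 1);
        System.out.println("CAS with stale stamp succeeded? " + success); // prints false
    }
}
```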
###4.2.2 lock
- synchronized: relies on the JVM
- Synchronized code block: the code enclosed in braces, locking on the given object
- Synchronized instance method: the entire method, locking on the instance it is called on
- Synchronized static method: the entire static method, locking on the Class object, so it applies to all instances
```java
package com.mmall.concurrency.example.count;

import com.mmall.concurrency.annoations.ThreadSafe;
import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * @author JavaEdge
 */
@Slf4j
@ThreadSafe
public class CountExample3 {

    /**
     * Total number of requests
     */
    public static int clientTotal = 5000;

    /**
     * Number of threads executing concurrently
     */
    public static int threadTotal = 200;

    public static int count = 0;

    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(threadTotal);
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (Exception e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("count:{}", count);
    }

    /**
     * Synchronized static method: only one thread can increment at a time
     */
    private synchronized static void add() {
        count++;
    }
}
```
Marking add() as synchronized fixes the counter: CountExample3 is thread safe.
- Synchronized class block (synchronized (SomeClass.class)): the code enclosed in braces, locking on the Class object, so it applies to all instances
Note that synchronized is not inherited: when a subclass overrides a synchronized method of its parent class, the overriding method is not synchronized unless it declares the keyword itself. A minimal sketch of the different synchronized forms follows.
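The sketch below (a hypothetical class, not from the project) shows which lock each form of synchronized uses.

```java
public class SynchronizedForms {

    // Synchronized instance method: locks on the instance ("this")
    public synchronized void instanceMethod() {
        // ...
    }

    // Synchronized code block: locks on the calling object
    public void instanceBlock() {
        synchronized (this) {
            // ...
        }
    }

    // Synchronized static method: locks on SynchronizedForms.class, shared by all instances
    public static synchronized void staticMethod() {
        // ...
    }

    // Synchronized class block: also locks on SynchronizedForms.class
    public void classBlock() {
        synchronized (SynchronizedForms.class) {
            // ...
        }
    }
}
```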
Lock: implemented in code (for example ReentrantLock), relying on special CPU instructions (CAS).
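A minimal sketch (hypothetical counter class, not from the project) of the same counting logic protected with ReentrantLock:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCountExample {

    private static int count = 0;
    private static final Lock lock = new ReentrantLock();

    private static void add() {
        lock.lock();
        try {
            count++;
        } finally {
            // release in finally so the lock is freed even if the body throws
            lock.unlock();
        }
    }
}
```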
###4.2.3 comparison
- synchronized: an uninterruptible lock; suited to light contention; good readability
- Lock: an interruptible lock with more flexible synchronization; stays effective under heavy contention
- Atomic: stays effective under heavy contention and performs better than Lock, but can only synchronize a single value
##4.3 visibility
Without visibility guarantees, the changes one thread makes may be invisible to other threads.
Visibility means that when one thread modifies a shared variable in main memory, other threads can observe the change promptly.
###4.3.1 why shared variables are not visible between threads
- Interleaved thread execution
- Instruction reordering combined with interleaved thread execution
- Updated values of shared variables not being synchronized promptly between working memory and main memory
###4.3.2 visibility of synchronized
JMM rules for synchronized:
- Before a thread releases the lock, the latest values of shared variables must be flushed to main memory
- When a thread acquires the lock, the values of shared variables in its working memory are cleared, so the latest values must be re-read from main memory when the shared variables are used (acquiring and releasing must be on the same lock)
###4.3.3 visibility of volatile
Volatile visibility is achieved by inserting memory barriers and prohibiting reordering optimizations:
- When a volatile variable is written, a store barrier instruction is added after the write, flushing the shared variable's value from local (working) memory to main memory
- When a volatile variable is read, a load barrier instruction is added before the read, forcing the shared variable to be read from main memory
- Typical volatile usage (as a status flag):

```java
volatile boolean inited = false;

// Thread 1:
context = loadContext();
inited = true;

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
```
##4.4 ordering
Instructions don't always "play by the rules": they may be reordered.
When one thread observes the order in which instructions execute in another thread, the observed order generally appears out of order because of instruction reordering.
The JMM allows the compiler and processor to reorder instructions; such reordering does not affect the result of single-threaded execution, but it can affect the correctness of multi-threaded concurrent execution.
###4.4.1 happens-before rules
#5 publishing objects
- Publishing an object
  Making an object usable by code outside its current scope
- Object escape
  An incorrect publication that lets other threads see an object before its construction has completed
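A minimal sketch of object escape (a hedged example, not necessarily matching the project's code): the constructor creates an inner-class instance, which implicitly publishes a reference to the not-yet-fully-constructed outer object.

```java
public class Escape {

    private int thisCanBeEscaped = 0;

    public Escape() {
        // creating the inner class here lets "this" escape before construction finishes
        new InnerClass();
        thisCanBeEscaped = 1;
    }

    private class InnerClass {
        InnerClass() {
            // reads a field of the outer object while its constructor is still running
            System.out.println(Escape.this.thisCanBeEscaped); // prints 0
        }
    }

    public static void main(String[] args) {
        new Escape();
    }
}
```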
##5.1 safely publishing objects
```java
package com.mmall.concurrency.example.singleton;

import com.mmall.concurrency.annoations.NotThreadSafe;

/**
 * Lazy mode - double-checked locking singleton
 * The singleton instance is created when it is first used
 *
 * @author JavaEdge
 */
@NotThreadSafe
public class SingletonExample4 {

    /**
     * Private constructor
     */
    private SingletonExample4() {
    }

    // Creating an instance normally takes three steps:
    // 1. memory = allocate()   allocate memory for the object
    // 2. ctorInstance()        initialize the object
    // 3. instance = memory     point instance at the allocated memory
    //
    // With JVM and CPU optimizations, instruction reordering may occur:
    // 1. memory = allocate()   allocate memory for the object
    // 3. instance = memory     point instance at the allocated memory
    // 2. ctorInstance()        initialize the object

    /**
     * Singleton instance
     */
    private static SingletonExample4 instance = null;

    /**
     * Static factory method
     *
     * @return the singleton instance
     */
    public static SingletonExample4 getInstance() {
        // Double-checked locking                          // B
        if (instance == null) {
            // Synchronized lock
            synchronized (SingletonExample4.class) {
                if (instance == null) {
                    // A - 3
                    instance = new SingletonExample4();
                }
            }
        }
        return instance;
    }
}
```
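The standard fix (a sketch; the class name SingletonExample5 is assumed) is to declare the instance field volatile, which forbids the 2/3 reordering above so no thread can observe a reference to a half-initialized object:

```java
package com.mmall.concurrency.example.singleton;

import com.mmall.concurrency.annoations.ThreadSafe;

/**
 * Lazy mode - double-checked locking with volatile
 */
@ThreadSafe
public class SingletonExample5 {

    private SingletonExample5() {
    }

    // volatile forbids reordering "point instance at memory" before "initialize the object"
    private volatile static SingletonExample5 instance = null;

    public static SingletonExample5 getInstance() {
        if (instance == null) {
            synchronized (SingletonExample5.class) {
                if (instance == null) {
                    instance = new SingletonExample5();
                }
            }
        }
        return instance;
    }
}
```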
#7 AQS (AbstractQueuedSynchronizer)
##7.1 introduction
- Uses a Node-based FIFO queue, which can serve as the basic framework for building locks and other synchronizers
- Uses a single int to represent the synchronization state
- It is designed to be used through inheritance
- Subclasses manage the state by implementing the template methods that manipulate it
- Both exclusive and shared modes can be implemented at the same time (a minimal custom-synchronizer sketch follows)
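As a sketch of the inheritance-based usage described above (a hypothetical, non-reentrant mutex, not from the project): a subclass of AbstractQueuedSynchronizer implements tryAcquire/tryRelease against the int state.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A non-reentrant mutex built on AQS: state 0 = unlocked, 1 = locked
public class Mutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // atomically flip the state from 0 to 1; only one thread succeeds
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock() {
        // blocks in the FIFO queue until tryAcquire succeeds
        sync.acquire(1);
    }

    public void unlock() {
        // wakes the next queued thread
        sync.release(1);
    }
}
```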
AQS-based synchronization components:
##CountDownLatch
```java
package com.mmall.concurrency.example.aqs;

import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * @author JavaEdge
 */
@Slf4j
public class CountDownLatchExample1 {

    private final static int threadCount = 200;

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        final CountDownLatch countDownLatch = new CountDownLatch(threadCount);
        for (int i = 0; i < threadCount; i++) {
            final int threadNum = i;
            exec.execute(() -> {
                try {
                    test(threadNum);
                } catch (Exception e) {
                    log.error("exception", e);
                } finally {
                    countDownLatch.countDown();
                }
            });
        }
        countDownLatch.await();
        log.info("finish");
        exec.shutdown();
    }

    private static void test(int threadNum) throws Exception {
        Thread.sleep(100);
        log.info("{}", threadNum);
        Thread.sleep(100);
    }
}
```
The variant below waits at most 10 ms (await with a timeout) and then continues, so it logs "finish" even if some tasks have not completed:

```java
package com.mmall.concurrency.example.aqs;

import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Process tasks within a specified time
 *
 * @author JavaEdge
 */
@Slf4j
public class CountDownLatchExample2 {

    private final static int threadCount = 200;

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        final CountDownLatch countDownLatch = new CountDownLatch(threadCount);
        for (int i = 0; i < threadCount; i++) {
            final int threadNum = i;
            exec.execute(() -> {
                try {
                    test(threadNum);
                } catch (Exception e) {
                    log.error("exception", e);
                } finally {
                    countDownLatch.countDown();
                }
            });
        }
        // wait at most 10 ms, then move on
        countDownLatch.await(10, TimeUnit.MILLISECONDS);
        log.info("finish");
        exec.shutdown();
    }

    private static void test(int threadNum) throws Exception {
        Thread.sleep(100);
        log.info("{}", threadNum);
    }
}
```
##Semaphore usage
The example below uses tryAcquire: when no permit is available the task is simply discarded instead of blocking.

```java
package com.mmall.concurrency.example.aqs;

import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * @author JavaEdge
 */
@Slf4j
public class SemaphoreExample3 {

    private final static int threadCount = 20;

    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(3);
        for (int i = 0; i < threadCount; i++) {
            final int threadNum = i;
            exec.execute(() -> {
                try {
                    // Try to obtain a permit without blocking
                    if (semaphore.tryAcquire()) {
                        test(threadNum);
                        // Release the permit
                        semaphore.release();
                    }
                } catch (Exception e) {
                    log.error("exception", e);
                }
            });
        }
        exec.shutdown();
    }

    private static void test(int threadNum) throws Exception {
        log.info("{}", threadNum);
        Thread.sleep(1000);
    }
}
```
##CyclicBarrier
```java
package com.mmall.concurrency.example.aqs;

import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * @author JavaEdge
 */
@Slf4j
public class CyclicBarrierExample1 {

    private static CyclicBarrier barrier = new CyclicBarrier(5);

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            final int threadNum = i;
            Thread.sleep(1000);
            executor.execute(() -> {
                try {
                    race(threadNum);
                } catch (Exception e) {
                    log.error("exception", e);
                }
            });
        }
        executor.shutdown();
    }

    private static void race(int threadNum) throws Exception {
        Thread.sleep(1000);
        log.info("{} is ready", threadNum);
        // wait until 5 threads have reached the barrier
        barrier.await();
        log.info("{} continue", threadNum);
    }
}
```
The variant below waits on the barrier with a 2000 ms timeout and logs the barrier exception instead of blocking forever:
```java
package com.mmall.concurrency.example.aqs;

import lombok.extern.slf4j.Slf4j;

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * @author JavaEdge
 */
@Slf4j
public class CyclicBarrierExample2 {

    private static CyclicBarrier barrier = new CyclicBarrier(5);

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            final int threadNum = i;
            Thread.sleep(1000);
            executor.execute(() -> {
                try {
                    race(threadNum);
                } catch (Exception e) {
                    log.error("exception", e);
                }
            });
        }
        executor.shutdown();
    }

    private static void race(int threadNum) throws Exception {
        Thread.sleep(1000);
        log.info("{} is ready", threadNum);
        try {
            // wait at most 2000 ms for the other parties
            barrier.await(2000, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            log.warn("BarrierException", e);
        }
        log.info("{} continue", threadNum);
    }
}
```
#9 thread pool
##9.1 newCachedThreadPool
##9.2 newFixedThreadPool
##9.3 newSingleThreadExecutor
As the output shows, tasks are executed one at a time, in submission order.
##9.4 newScheduledThreadPool
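A minimal sketch (an assumed example, not from the project) showing the four Executors factory methods named above; the scheduled pool runs one task after a delay and another at a fixed rate:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExample {
    public static void main(String[] args) {
        // Reuses cached threads, creating new ones as needed
        ExecutorService cached = Executors.newCachedThreadPool();
        // A fixed number of threads; extra tasks wait in the queue
        ExecutorService fixed = Executors.newFixedThreadPool(10);
        // A single worker thread; tasks run sequentially in submission order
        ExecutorService single = Executors.newSingleThreadExecutor();
        // Supports delayed and periodic execution
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(5);

        scheduled.schedule(() -> System.out.println("run after 3 seconds"), 3, TimeUnit.SECONDS);
        scheduled.scheduleAtFixedRate(() -> System.out.println("run every 1 second"), 1, 1, TimeUnit.SECONDS);

        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
        // the scheduled pool is left running here so the periodic task keeps firing
    }
}
```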
#11 high concurrency: capacity expansion
##11.1 capacity expansion
##11.2 capacity expansion - database
#12 high concurrency: caching
##12.1 caching
###1 cache features
###2 factors affecting cache hit rate
###3 cache classification and application scenarios
##12.2 caching for high concurrency - features, scenarios, and components
###1 Guava Cache
###2 cache - Memcached
###3 cache - Redis
##12.3 using Redis
- Configuration class
- Service class (a hedged sketch of both follows)
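A minimal sketch (assumed host/port and class names, not necessarily matching the project's code) of a Jedis-based configuration class and service class:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Configuration class: builds the connection pool
class RedisConfig {

    public JedisPool jedisPool() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(20);
        poolConfig.setMaxIdle(10);
        // assumed local Redis instance
        return new JedisPool(poolConfig, "127.0.0.1", 6379);
    }
}

// Service class: borrows a connection from the pool for each operation
class RedisService {

    private final JedisPool pool;

    RedisService(JedisPool pool) {
        this.pool = pool;
    }

    public void set(String key, String value) {
        try (Jedis jedis = pool.getResource()) {
            jedis.set(key, value);
        }
    }

    public String get(String key) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}
```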
##12.4 high-concurrency caching: problems and practice
Cache consistency
#13 high concurrency: message queues
##13.1 business case
Each SMS is encapsulated as a message and placed into the message queue. If there are too many SMS messages and the queue fills up, the sending rate has to be throttled.
Encapsulating events as messages and putting them on a queue decouples the business logic and makes the design asynchronous; as long as the SMS service itself is healthy, the SMS will eventually be delivered to the user. A hedged producer sketch follows.
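A minimal sketch (the topic name, broker address, and message format are assumptions, not from the project) of putting an SMS message onto a Kafka queue; a separate consumer service would pick it up and send the SMS asynchronously:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SmsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // assumed broker address
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // the SMS is encapsulated as a message and handed to the queue
            String smsMessage = "{\"phone\":\"13800000000\",\"content\":\"your code is 1234\"}";
            producer.send(new ProducerRecord<>("sms-topic", smsMessage));
        }
    }
}
```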
##13.2 characteristics of message queues
- Why do we need a message queue
- Advantages
- Queue implementations: Kafka