1. Common inter-process communication (IPC) mechanisms
- Anonymous pipe: a pipe is a half-duplex communication mechanism. Data flows in only one direction, and it can only be used between related processes, typically a parent and its child. For example, the `|` in a Linux command line such as `cat file | grep foo` is an anonymous pipe: the output of the first command becomes the input of the second.
- Named pipe (FIFO): a named pipe is also half-duplex, but it allows communication between unrelated processes.
- Message queue: a message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. It overcomes the drawbacks of signals (which carry very little information) and pipes (which carry only an unformatted byte stream and have a limited buffer size).
- Shared memory + semaphore: shared memory is a region created by one process and mapped into the address space of other processes, so several processes can access it directly. It is the fastest IPC mechanism, but it is usually combined with another mechanism, such as a semaphore, to synchronize access between processes (see the sketch after this list).
- Semaphore: a semaphore is a counter used to control access by multiple processes to a shared resource. It is often used as a locking mechanism that stops other processes from entering the shared resource while one process is using it, so it mainly serves as a synchronization tool between processes and between threads within a process.
- Socket: a socket is also an inter-process communication mechanism, and it is mostly used for communication between processes on different hosts over a network.
- Signal: a signal is a relatively complex communication mechanism used to notify the receiving process that some event has occurred (for example, telling a process to stop what it is doing and handle the event immediately).
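The shared-memory idea above can be approximated in Java with a memory-mapped file. This is only a minimal sketch, assuming a made-up path `/tmp/shm-demo` and class name; real shared-memory IPC is usually set up with OS facilities such as shmget/mmap and guarded by a semaphore:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemoryWriter {
    public static void main(String[] args) throws IOException {
        // Map a 1 KB region of the file into this process's address space.
        try (FileChannel channel = FileChannel.open(Path.of("/tmp/shm-demo"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.putInt(0, 42); // another process that maps the same file can read this value directly
        }
    }
}
```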
2. Differences between sleep, wait, yield and join in Java
| Method | Effect |
| --- | --- |
| sleep() | Suspends the currently executing thread for the specified time, putting it into a timed-waiting (blocked) state. It gives up the CPU but does not release any lock it holds. |
| wait() | Makes the current thread wait until another thread calls notify() or notifyAll() on the same object. The thread releases the object's lock and gives up the CPU. |
| yield() | Pauses the currently executing thread and returns it to the ready state, so it may be scheduled again right away. It only gives threads of the same or higher priority a chance to run, and it does not release any lock. |
| join() | Makes the calling thread wait until the thread whose join() was called has terminated. It is implemented with wait() internally, so the lock on that thread object and the CPU are released. |

Usage:
```java
sleep(1000); yield(); thread1.wait(); thread1.notifyAll(); thread1.notify(); thread1.join();
```
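Note that `thread1.wait()` and `thread1.notify()` only work if the calling thread holds the monitor of `thread1`; otherwise an IllegalMonitorStateException is thrown. A minimal sketch of the usual pattern on a shared lock object (class and variable names are made up for illustration):

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();              // releases the lock and the CPU until notified
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("waiter resumed");
            }
        });
        waiter.start();
        Thread.sleep(100);                    // crude way to let the waiter reach wait() first
        synchronized (lock) {
            lock.notify();                    // wakes one thread waiting on this monitor
        }
        waiter.join();                        // wait for the waiter thread to finish
    }
}
```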
join method
Waits for the thread to terminate.

The thread that calls join() on another thread waits until that thread has finished before continuing. For example, `t.join()` makes the current thread wait for thread t to end; without this line, main could finish first and produce unpredictable results.
```java
public class Test {
    public static void main(String[] args) {
        Thread t1 = new MyThread1();
        t1.start();
        for (int i = 0; i < 20; i++) {
            System.out.println("Main thread " + i + " executing!");
            if (i > 2) {
                try {
                    // t1 is joined into the main thread: the main thread stops here
                    // and waits until t1 has finished executing, then continues.
                    t1.join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

class MyThread1 extends Thread {
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println("Thread 1 " + i + " executing!");
        }
    }
}
```

join(long) can release the lock on the thread object (it is implemented with wait), so other threads can still call its synchronized methods; sleep(long) does not release any lock.
3. Kernel mode and user mode
3.1 Kernel mode and user mode
Kernel mode: when a task (process) executes a system call and traps into kernel code, we say the process is in kernel mode. The processor is then executing kernel code at the highest privilege level (ring 0).

User mode: when the process is executing its own user code, it is said to be in user mode; the processor runs that code at the lowest privilege level (ring 3).

In short, when a process starts executing kernel code because of a system call, we say it is in kernel mode. Calling system functions such as read(), write(), or open() is a system call.

When a process executes the application's own code, it is in user mode.
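As an illustration of the idea (this is a sketch, not a statement about any particular JVM; the file path is an assumed Linux example), an ordinary Java file read ends up issuing read() system calls, so the CPU switches from user mode to kernel mode and back on each call:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class SyscallDemo {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("/etc/hostname")) { // assumed path
            int b;
            while ((b = in.read()) != -1) {   // read() system call: user mode -> kernel mode -> user mode
                System.out.print((char) b);   // printing eventually issues write() system calls
            }
        }
    }
}
```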
CPUs with the Intel x86 architecture have several privilege levels (rings), from 0 to 3, where ring 0 is the highest and ring 3 the lowest.

When the operating system boots, the CPU is in real mode, which is effectively equivalent to ring 0, so the operating system automatically gets the highest privilege; after it switches to protected mode it still runs in ring 0 and remains the most privileged code. Because your programs are loaded by the operating system, it sets their running level to ring 3 when it loads them.
3.2 How to switch from user mode to kernel mode
- System call
- Exception
- Peripheral (hardware) interrupt
These three are the main ways the system switches from user mode to kernel mode at run time. A system call is actively initiated by the user process, while exceptions and peripheral interrupts are passive, triggered by error conditions or hardware rather than by the process itself.
3.3 User space and kernel space

User space is the memory area where user processes live; kernel (system) space, by contrast, is the memory area occupied by the operating system. The data of both user processes and system processes resides in memory.
4. ThreadLocal
ThreadLocal is mainly used for data isolation: the data stored in it belongs only to the current thread and is isolated from the data of other threads. It answers the question of how, in a multi-threaded environment, a thread can keep its own variables from being tampered with by other threads.
- A ThreadLocal behaves like a global variable for a single thread, not a global variable for the whole program.
ThreadLocal usage:
```java
public class Main1 {
    static ThreadLocal<Integer> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(() -> {
            System.out.println(threadLocal.get()); // null: nothing set in this thread yet
            threadLocal.set(0);
            System.out.println(threadLocal.get()); // 0
        });
        Thread thread2 = new Thread(() -> {
            System.out.println(threadLocal.get()); // null: thread1's value is not visible here
            threadLocal.set(1);
            System.out.println(threadLocal.get()); // 1
        });
        thread1.start();
        thread1.join();
        thread2.start();
    }
}
/* output:
null
0
null
1
*/
```
Underlying principle of ThreadLocal:
- Remove the value after each use, for example threadLocal.remove().
- The Entry class inside ThreadLocalMap extends WeakReference, so the key can be cleared by the garbage collector, which helps prevent memory leaks.

When a value is saved, the ThreadLocal itself is stored as the key in the thread's ThreadLocalMap. Normally both key and value would be strongly referenced from outside, but the key is deliberately held through a WeakReference.

This leads to a problem: once the ThreadLocal has no external strong reference, the key is reclaimed during GC, but if the thread that set the value keeps running, the value in the Entry object is still strongly reachable and cannot be reclaimed, so a memory leak can occur.

For example, threads in a thread pool are reused: after a task finishes, the thread stays alive so it can be reused, and therefore the value set through ThreadLocal is still held by that thread, causing a memory leak.
How to avoid the memory leak:

Call remove() when you are done with the value; just remember to clear it with remove() at the end of the code, typically in a finally block.
```java
static class Entry extends WeakReference<ThreadLocal<?>> {
    /** The value associated with this ThreadLocal. */
    Object value;

    Entry(ThreadLocal<?> k, Object v) {
        super(k);   // the ThreadLocal key is held only through a weak reference
        value = v;  // the value itself is a strong reference
    }
}
```
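To make the remove() advice above concrete, here is a minimal sketch of the remove-in-finally pattern (the class, field, and method names are made up for illustration):

```java
public class ThreadLocalCleanupDemo {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static void handle(String user) {
        CONTEXT.set(user);
        try {
            System.out.println("current user: " + CONTEXT.get()); // business logic would go here
        } finally {
            CONTEXT.remove(); // clear the entry so a pooled thread does not carry it into the next task
        }
    }

    public static void main(String[] args) {
        handle("alice");
    }
}
```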
ThreadLocal usage scenarios:
1. Solving the permission problem, i.e. passing per-request authorization state from a filter to a controller:
```java
@Component
public class MyFilter extends OncePerRequestFilter {

    public static ThreadLocal<Integer> auth = new ThreadLocal<>();

    @Override
    protected void doFilterInternal(HttpServletRequest httpServletRequest,
                                    HttpServletResponse httpServletResponse,
                                    FilterChain filterChain) throws ServletException, IOException {
        auth.set(0);
        // "1".equals(...) avoids a NullPointerException when the header is missing
        if ("1".equals(httpServletRequest.getHeader("Authorization"))) {
            auth.set(1);
        }
        filterChain.doFilter(httpServletRequest, httpServletResponse);
    }
}
```
```java
@GetMapping("/")
public ResponseEntity<String> index() {
    try {
        if (MyFilter.auth.get() == 0) {
            return ResponseEntity.ok("No permission");
        } else {
            return ResponseEntity.ok("Has permission");
        }
    } finally {
        // Must be cleared: with a thread pool, the next request handled by this
        // thread would otherwise see the value left over from the previous request.
        MyFilter.auth.remove();
    }
}
```
5. Four ways to create a thread pool
- newCachedThreadPool: creates a cacheable thread pool. If the pool has more threads than it currently needs, idle threads are reclaimed; if no idle thread is available, a new one is created.
- newFixedThreadPool: creates a fixed-size thread pool, which limits the maximum number of concurrently running threads. Excess tasks wait in a queue.
- newScheduledThreadPool: creates a fixed-size thread pool that supports scheduled and periodic task execution (a delayed-execution example is shown in 5.3).
- newSingleThreadExecutor: creates a single-threaded executor. It uses one worker thread to execute tasks, guaranteeing that they are executed in the specified order (FIFO, LIFO, or by priority).
5.1 newCachedThreadPool

newCachedThreadPool creates a cacheable thread pool. If the pool has more threads than the current workload needs, idle threads are reclaimed; if no idle thread is available when a task arrives, a new thread is created:
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class test11 {
    public static void main(String[] args) {
        // 1. Create a cacheable thread pool whose threads can be reused
        ExecutorService executor = Executors.newCachedThreadPool();
        // Submit 10 tasks
        for (int i = 0; i < 10; i++) {
            int temp = i;
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Thread Name:" + Thread.currentThread().getName() + " ,i:" + temp);
                }
            });
        }
        executor.shutdown();
    }
}
```
You can see that 10 tasks were submitted but, in this run, only five threads were used, because newCachedThreadPool reuses idle threads when possible and only creates a new thread when no idle one is available.
5.2 newFixedThreadPool
Creates a fixed-size thread pool that limits the maximum number of concurrently running threads; excess tasks wait in a queue.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class test22 {
    public static void main(String[] args) {
        // 1. Create a fixed-size thread pool with 3 threads
        ExecutorService executor = Executors.newFixedThreadPool(3);
        // Submit 10 tasks
        for (int i = 0; i < 10; i++) {
            int temp = i;
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    System.out.println("Thread Name:" + Thread.currentThread().getName() + " ,i:" + temp);
                }
            });
        }
        executor.shutdown();
    }
}
```
You can see that newFixedThreadPool creates a fixed-size pool: at most three threads run concurrently, and the remaining tasks wait in the queue.
5.3 newScheduledThreadPool
Creates a fixed-size thread pool that supports scheduled and periodic task execution. The delayed-execution example code is as follows:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Wrapper class added so the snippet compiles as-is
public class test33 {
    public static void main(String[] args) {
        // 1. Create a scheduled thread pool with 5 threads
        ScheduledExecutorService newScheduledThreadPool = Executors.newScheduledThreadPool(5);
        for (int i = 0; i < 10; i++) {
            final int temp = i;
            newScheduledThreadPool.schedule(new Runnable() {
                public void run() {
                    System.out.println("i:" + temp);
                }
            }, 3, TimeUnit.SECONDS); // each task runs after a 3-second delay
        }
        // Already-scheduled delayed tasks still run; without this the JVM would not exit
        newScheduledThreadPool.shutdown();
    }
}
```
Each task runs after a delay of 3 seconds.
5.4 newSingleThreadExecutor
Creates a single-threaded executor. It uses a single worker thread to execute tasks, guaranteeing that all tasks run in the specified order (FIFO, LIFO, or by priority). The example code is as follows:
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class test44 {
    public static void main(String[] args) {
        // 1. Create a single-threaded executor
        ExecutorService newSingleThreadExecutor = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 10; i++) {
            final int index = i;
            newSingleThreadExecutor.execute(new Runnable() {
                @Override
                public void run() {
                    System.out.println("index:" + index);
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        newSingleThreadExecutor.shutdown();
    }
}
```
Note: the results are printed in order, which is equivalent to executing the tasks one after another. shutdown() stops the executor after the submitted tasks have finished.
However, the Alibaba Java development guidelines recommend against all four of these Executors factory methods, preferring an explicitly configured ThreadPoolExecutor:
```java
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
        3, 3, 20, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
```
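For reference, a slightly fuller sketch of an explicitly configured pool; the parameter values, the bounded queue, and the AbortPolicy choice are illustrative assumptions, not a recommendation from the original text:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExplicitPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize: threads kept alive even when idle
                4,                                    // maximumPoolSize: upper bound, reached only when the queue is full
                60, TimeUnit.SECONDS,                 // keepAliveTime for threads above the core size
                new ArrayBlockingQueue<>(100),        // bounded queue, unlike the unbounded LinkedBlockingQueue above
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy when both queue and pool are full
        );
        for (int i = 0; i < 10; i++) {
            final int task = i;
            pool.execute(() -> System.out.println(Thread.currentThread().getName() + " runs task " + task));
        }
        pool.shutdown();
    }
}
```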
6. Sequential output ABC problem
6.1 Implementation with a single-threaded executor
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class test44 {

    public static class thread11 implements Runnable {
        @Override
        public void run() {
            char[] ch = {'A', 'B', 'C'};
            for (int i = 0; i < 100; i++) {
                int temp = i;
                System.out.println("index:" + ch[temp % 3]);
                // try {
                //     Thread.sleep(200);
                // } catch (Exception e) {
                //     // TODO: handle exception
                // }
            }
        }
    }

    public static void main(String[] args) {
        // 1. Run the task on a dedicated thread and on a single-threaded executor
        thread11 thread = new thread11();
        new Thread(thread).start();
        ExecutorService newSingleThreadExecutor = Executors.newSingleThreadExecutor();
        newSingleThreadExecutor.execute(thread);
        newSingleThreadExecutor.shutdown();
    }
}
```
6.2 Thread counter
```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadDemo4 implements Runnable {

    // Barrier that the three threads wait on after each round
    private static CyclicBarrier cyclicBarrier = new CyclicBarrier(3);
    // Thread counter
    private static Integer currentCount = 0;
    private static final Integer MAX_COUNT = 30;
    private static String[] chars = {"a", "b", "c"};

    private String name;

    public ThreadDemo4(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        while (currentCount < MAX_COUNT) {
            // Print only when it is this thread's turn (a, b, c in rotation)
            while (this.name.equals(chars[currentCount % 3])) {
                printAndPlusOne(this.name + "\t" + currentCount);
            }
            try {
                cyclicBarrier.await();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void printAndPlusOne(String name) {
        System.out.println(name);
        currentCount++;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
                3, 3, 20, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
        threadPoolExecutor.execute(new ThreadDemo4("a"));
        threadPoolExecutor.execute(new ThreadDemo4("b"));
        threadPoolExecutor.execute(new ThreadDemo4("c"));
        threadPoolExecutor.shutdown();
    }
}
```