I won't cover the eager ("hungry man") style here, only the lazy ("full man") style!
First, here is the test class we'll use. Only the Singleton implementation inside it changes from one example to the next:
package com.example.demo;

public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {
            synchronized (Singleton.class) {
                instance = new Singleton();
            }
        }
        return instance;
    }

    static class ThreadTest extends Thread {
        @Override
        public void run() {
            for (int i = 0; i < 2; i++) {
                System.out.println(Thread.currentThread().getName() + "==" + Singleton.getInstance().hashCode());
            }
        }
    }

    public static void main(String[] args) {
        ThreadTest threadTest = new ThreadTest();
        Thread thread1 = new Thread(threadTest);
        Thread thread2 = new Thread(threadTest);
        Thread thread3 = new Thread(threadTest);
        Thread thread4 = new Thread(threadTest);
        Thread thread5 = new Thread(threadTest);
        thread1.start();
        thread2.start();
        thread3.start();
        thread4.start();
        thread5.start();
    }
}
The first version:
public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {
            instance = new Singleton();
        }
        return instance;
    }
}
As everyone knows, this breaks under multithreading: two threads can both see instance == null and each create its own instance, so I won't dwell on it.
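To make the race easier to observe, here is a small sketch of my own (the RaceDemo class, the latch, and the thread names are my additions, not part of the original test class). It releases both threads at the same instant against the first version:

import java.util.concurrent.CountDownLatch;

public class RaceDemo {
    public static void main(String[] args) {
        CountDownLatch start = new CountDownLatch(1);
        Runnable task = () -> {
            try {
                start.await(); // hold both threads until they can call getInstance() together
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println(Thread.currentThread().getName()
                    + "==" + Singleton.getInstance().hashCode());
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
        start.countDown(); // release both threads at once
        // if the race hits, the two lines print different hashCodes,
        // i.e. two Singleton objects were created
    }
}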
Second:
public class Singleton {
    private static Singleton instance = null;

    public synchronized static Singleton getInstance() {
        if (null == instance) {
            instance = new Singleton();
        }
        return instance;
    }
}
or
public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        synchronized (Singleton.class) {
            if (null == instance) {
                instance = new Singleton();
            }
        }
        return instance;
    }
}
These two are essentially the same: the whole check-and-create action is guarded by a lock, so only one thread at a time can obtain the instance.
But there is a problem: every call is now a synchronized operation. Compared with the tiny bit of logic being protected, the cost of acquiring and releasing the lock dominates, which is not a good trade once the instance already exists.
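A rough, unscientific sketch of my own (class name and loop count are arbitrary) to illustrate the point: after the instance exists, every getInstance() call in the second version still enters the monitor.

public class LockCostSketch {
    public static void main(String[] args) {
        Singleton.getInstance(); // make sure the instance already exists
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            // each of these calls still pays monitor enter/exit in version 2,
            // even though the instance was created long ago
            sink += Singleton.getInstance().hashCode();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("10M calls took " + elapsed / 1_000_000 + " ms, sink=" + sink);
    }
}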
The third case:
To improve on the second version, we check first and only then enter the synchronized block:
public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {
            synchronized (Singleton.class) {
                instance = new Singleton();
            }
        }
        return instance;
    }
}
With this change, a thread enters the synchronized block only when instance == null, so once new Singleton() has executed, later calls basically never touch the lock again. Is that good enough? No way!!! Here's why:
Suppose there are two threads, A and B, and consider the following interleaving:
Thread B has just entered the synchronized block but has not yet executed instance = new Singleton(), so instance is still null. Thread A then gets a CPU time slice and passes the null == instance check. Because the lock is held by B, A blocks at the synchronized statement. When B finishes the synchronized block and releases the lock, A acquires it, enters the block, and creates a second instance.
So this version is still not thread-safe.
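Here is the third version again, annotated with that failing interleaving (the comments are mine; the code is unchanged):

public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {              // A passes this check while instance is still null
            synchronized (Singleton.class) { // A blocks here because B already holds the lock
                instance = new Singleton();  // B creates an instance and releases the lock;
                                             // A then enters and creates a second one
            }
        }
        return instance;
    }
}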
Let's improve it further. As described above, even after thread B has created an instance, thread A can still enter the synchronized block and create another one. So why not add a second check inside the synchronized block?
The fourth case:
public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {
            synchronized (Singleton.class) {
                if (null == instance) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
Emmm, it looks like the problem is solved!!! Happy days....
But................
After studying the Java Virtual Machine, we know that new-ing an object involves three steps (a rough pseudo-code sketch of these steps follows the list):
1. Allocate memory space
2. Run the constructor to initialize the object
3. Point the reference at the object
(For the allocation step, HotSpot uses TLABs, Thread Local Allocation Buffers, to keep allocation safe under multithreading; that's a topic for another time if anyone is interested.)
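As a concrete picture, instance = new Singleton() can be decomposed roughly like this (pseudo-Java; the names memory and ctor are purely illustrative, not real JVM calls):

memory = allocate();     // 1. allocate memory for the object
ctor(memory);            // 2. run the constructor to initialize the fields
instance = memory;       // 3. point the reference at the memory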
We also know that the JVM may reorder instructions: instructions do not necessarily execute in the order we wrote them; the JVM only guarantees that the result observed within a single thread stays the same.
That leads to the following problem.
Suppose the JVM reorders the steps of new Singleton() like this:
1. Allocate memory space
2. The reference points to the memory space
3. Run the constructor to initialize the object
From the point of view of the single thread doing the new, the result has not changed: it still ends up with an initialized instance, so the reordering is allowed.
But what happens in the following scenario?
Threads A and B again. A is inside the critical section executing new Singleton(), and because of the reordering it has already published the reference (step 2 above) but has not yet run the constructor. At that very moment B calls getInstance() and evaluates the outer null == instance check, which is not protected by any lock. B sees a non-null reference, skips the synchronized block entirely, and returns the instance while its fields are still uninitialized, so B ends up using a half-constructed object. (A thread that actually waits for and acquires the lock is fine: the second check inside the lock will see the fully built instance. The danger is the unlocked outer read.)
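Laid out as a timeline (the step labels are my own annotation of the scenario above, using the pseudo-steps from earlier):

// 1. A: memory = allocate()
// 2. A: instance = memory        // reference published before the constructor runs
// 3. B: if (null == instance)    // false, so B never enters the synchronized block
// 4. B: return instance          // B now holds a half-constructed object
// 5. A: ctor(memory)             // initialization finishes, too late for B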
How do we solve this? Declare the field volatile. volatile forbids reordering the reference assignment with the constructor, so other threads reading instance see either null or a fully initialized object (a detailed explanation of the volatile keyword can wait for another post, if you're interested).
So the final version is:
public class Singleton {
    private volatile static Singleton instance = null;

    public static Singleton getInstance() {
        if (null == instance) {
            synchronized (Singleton.class) {
                if (null == instance) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
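With this final version, running the test class from the top should print the same hashCode for every thread, which is exactly what we want from a singleton. One detail that applies to every version above: a real singleton should also declare a private constructor (private Singleton() {}) so that callers can't simply write new Singleton() themselves.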