Some of the content comes from the following blogs:
https://blog.csdn.net/bill_xiang_/article/details/81122044
https://www.cnblogs.com/zhaojj/p/8942647.html
1 Classification
Following the classification used earlier when studying the collections framework, the Map-related classes under JUC can be grouped as follows.
ConcurrentHashMap: inherits from AbstractMap and is the thread-safe counterpart of HashMap, i.e. a thread-safe hash table. Up to and including JDK 1.7 it used a segment locking mechanism; since JDK 1.8 it has been implemented with an array + linked list + red-black tree structure together with CAS atomic operations.
ConcurrentSkipListMap: inherits from AbstractMap and is the thread-safe counterpart of TreeMap, i.e. a thread-safe sorted map. It is implemented with a skip list.
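The practical difference can be seen through the public API: the skip-list map keeps its keys sorted, while the hash map does not guarantee any order. A minimal sketch (the class and method names below are invented for the demo):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class MapDemo {
    public static String firstKeyOrdered() {
        ConcurrentMap<String, Integer> hash = new ConcurrentHashMap<>();
        ConcurrentSkipListMap<String, Integer> skip = new ConcurrentSkipListMap<>();
        for (String k : new String[] {"banana", "apple", "cherry"}) {
            hash.put(k, k.length());
            skip.put(k, k.length());
        }
        // The skip-list map keeps keys sorted; the hash map does not guarantee order.
        return skip.firstKey();
    }
}
```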
2 ConcurrentHashMap
2.1 The segment locking mechanism of JDK 1.7
The main reason Hashtable is inefficient is that its implementation uses the synchronized keyword on put and other operations, and synchronized there locks the whole object. In other words, every modification such as put locks the entire hash table, which makes it perform poorly under contention.
Therefore, from JDK 1.5 to 1.7, Java implemented ConcurrentHashMap with a segment locking mechanism.
In short, ConcurrentHashMap holds a Segment array, so the whole hash table is divided into multiple segments. Each Segment element is similar to a Hashtable. A put operation first locates the Segment the element belongs to according to the hash, and then locks only that Segment with its ReentrantLock. In this way ConcurrentHashMap allows multiple threads to put concurrently, as long as they touch different segments.
Segment is an inner class of ConcurrentHashMap that extends ReentrantLock. ConcurrentHashMap and Segment form a composition: the ConcurrentHashMap class has a "Segment array" member, so one ConcurrentHashMap object contains several Segment objects.
HashEntry is also an inner class of ConcurrentHashMap; it is a singly linked list node that stores one key-value pair. Segment and HashEntry likewise form a composition: the Segment class has a "HashEntry array" member, and each slot of that array is the head of a singly linked list.
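The Segment idea can be sketched outside the JDK as a small striped-lock map. This is a hypothetical miniature, not the JDK 1.7 source: StripedMap, Segment, and segmentFor are names invented for the illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical miniature of the JDK 1.7 idea: N independent segments,
// each a plain hash map guarded by its own ReentrantLock.
public class StripedMap<K, V> {
    private static final int SEGMENTS = 16;
    private final Segment<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public StripedMap() {
        segments = new Segment[SEGMENTS];
        for (int i = 0; i < SEGMENTS; i++) segments[i] = new Segment<>();
    }

    // Pick a segment from the high bits of the hash, as JDK 1.7 did.
    private Segment<K, V> segmentFor(Object key) {
        return segments[(key.hashCode() >>> 28) & (SEGMENTS - 1)];
    }

    public V put(K key, V value) {
        Segment<K, V> s = segmentFor(key);
        s.lock.lock();
        try { return s.table.put(key, value); }
        finally { s.lock.unlock(); }
    }

    public V get(Object key) {
        Segment<K, V> s = segmentFor(key);
        s.lock.lock();
        try { return s.table.get(key); }
        finally { s.lock.unlock(); }
    }

    private static final class Segment<K, V> {
        final ReentrantLock lock = new ReentrantLock();
        final Map<K, V> table = new HashMap<>();
    }
}
```

Two puts that land in different segments take different locks and do not block each other, which is exactly the concurrency win over Hashtable's single lock.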
2.2 Improvements in JDK 1.8
In JDK 1.7, ConcurrentHashMap is implemented with the Segment locking mechanism, so its maximum concurrency is limited by the number of segments. In JDK 1.8 the implementation abandons this design in favor of an array + linked list + red-black tree structure similar to HashMap, with thread safety provided by CAS atomic updates, the volatile keyword, and the synchronized (reentrant) lock.
The JDK 1.8 implementation reduces lock granularity. In JDK 1.7 the unit of locking is a Segment, which contains multiple HashEntry lists; in JDK 1.8 the unit of locking is a single bucket head node.
The JDK 1.8 data structure is simpler, which makes the operations clearer. Since synchronized is used for synchronization, the Segment concept and the Segment data structure are no longer needed. On the other hand, because the granularity is finer, the implementation itself becomes more complex.
The JDK 1.8 resize operation supports multi-threaded concurrency. In the earlier version, while a Segment was resizing, other write threads on it were blocked. In JDK 1.8, one write thread triggers the resize, and other write threads that arrive during the resize can help complete this time-consuming operation.
JDK 1.8 uses the red-black tree to optimize long linked lists. Traversing a long linked list is slow, whereas red-black tree lookup is fast, so beyond a certain threshold the list is replaced by a tree.
2.3 Important attributes
sizeCtl: the control flag. This field is very important; it appears at every stage of ConcurrentHashMap's life cycle, and different values represent different states and functions:
A negative value indicates that initialization or resizing is in progress: -1 means initialization is in progress, and -N indicates that N-1 threads are resizing.
A value of 0 indicates that the hash table has not been initialized yet.
A positive value is the size of the next resize, i.e. the resize threshold; its value is always 0.75 times the current capacity. If the actual size of the hash table >= sizeCtl, a resize is triggered.
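The states above can be summarized in a small hypothetical helper (SizeCtl and describe are invented names; in the real class this is just a volatile int field):

```java
// Hypothetical helper that names the sizeCtl states described in the text.
public class SizeCtl {
    public static String describe(int sizeCtl) {
        if (sizeCtl == -1) return "initializing";
        if (sizeCtl < -1) return "resizing";        // -N: roughly N-1 resizing threads
        if (sizeCtl == 0) return "uninitialized";
        return "next resize threshold = " + sizeCtl; // 0.75 * current capacity
    }
}
```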
2.4 Construction methods
Note that the table is not initialized in the constructor; it is initialized lazily, when the first element is added.
Also, in JDK 1.8 the loadFactor parameter no longer has its load-factor meaning; it is kept only for compatibility with earlier versions, its role having been taken over by sizeCtl.
Likewise, the concurrencyLevel parameter no longer expresses the expected concurrency in JDK 1.8; it too is kept only for compatibility with earlier versions.
// Null-parameter constructor.
public ConcurrentHashMap() {
}

// Constructor that specifies the initial capacity.
public ConcurrentHashMap(int initialCapacity) {
    // Parameter validity check.
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    // Reserve extra space so the table does not resize immediately after
    // initialization, and so an initial capacity of 0 still yields a table.
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    // Set the control flag.
    this.sizeCtl = cap;
}

// Constructor that specifies the initial capacity and load factor.
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    this(initialCapacity, loadFactor, 1);
}

// Constructor that specifies initial capacity, load factor, and concurrency level.
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
    // Parameter validity check.
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    // Use at least one bucket per estimated writer thread.
    if (initialCapacity < concurrencyLevel)
        initialCapacity = concurrencyLevel;
    // Calculate the initial capacity.
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
              MAXIMUM_CAPACITY : tableSizeFor((int)size);
    // Set the control flag.
    this.sizeCtl = cap;
}

// Constructor that copies the given Map.
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    // Set the control flag.
    this.sizeCtl = DEFAULT_CAPACITY;
    // Put all entries of the given map.
    putAll(m);
}
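The tableSizeFor() rounding used by these constructors can be sketched on its own; the bit-smearing trick below mirrors the JDK's approach (reproduced here for illustration, with the class name TableSize invented for the demo):

```java
// tableSizeFor rounds a requested capacity up to the next power of two by
// smearing the highest set bit of (c - 1) into all lower bit positions.
public class TableSize {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    public static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
}
```

So a ConcurrentHashMap built with initialCapacity = 16 actually sizes its table from tableSizeFor(16 + 8 + 1) = 32, which is the "extra space to avoid resizing immediately" mentioned in the comment.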
2.5 Initialization method
The table is not initialized in the constructor but on first use; the resize threshold is also set at that time.
The initialization method is driven by the key field sizeCtl. If sizeCtl is less than 0, another thread is already initializing, so the current thread backs off; this also shows that initialization is completed by exactly one thread. The thread that wins uses CAS to set sizeCtl to -1 so no other thread can enter. After initialization, sizeCtl is set to 0.75 times the table capacity, serving as the threshold.
Thread safety during initialization rests on two steps:
1) Set sizeCtl to -1 with a CAS atomic update, so only one thread can enter.
2) After obtaining the initialization right, the thread makes a second check with "if ((tab = table) == null || tab.length == 0)" to ensure initialization runs only if it has not already happened.
// Initialize the table; CAS atomic updates guarantee thread safety,
// and volatile guarantees ordering and visibility.
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    // Loop until initialization completes.
    while ((tab = table) == null || tab.length == 0) {
        // sizeCtl < 0 means another thread is initializing; yield the CPU.
        if ((sc = sizeCtl) < 0)
            Thread.yield();
        // Try to acquire the initialization right: CAS sizeCtl from sc to -1.
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                // Double-check, in case another thread already finished
                // initializing; analogous to double-checked locking.
                if ((tab = table) == null || tab.length == 0) {
                    // Use the default capacity of 16 if none was specified.
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    // Create the node array with the chosen capacity.
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    // The resize threshold is 0.75 * n; writing it as
                    // n - n/4 is slightly cheaper than multiplying.
                    sc = n - (n >>> 2);
                }
            } finally {
                // Publish the threshold in sizeCtl.
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
2.6 The put method
1) Validate the arguments. If the key or value is null, a NullPointerException is thrown immediately: ConcurrentHashMap permits neither null keys nor null values (HashMap permits both).
2) Initialize if necessary. If the table is null, enter the initialization phase.
3) If the bucket the key maps to is empty, use a CAS atomic operation to insert the key-value pair as the bucket's head node.
4) Assist with resizing. If the hash of the bucket's head node is -1, i.e. the MOVED state, a resize is in progress, and the current thread enters the assisted-resize phase.
5) Insert the data. Check the hash of the bucket's head node: less than 0 means a red-black tree, greater than or equal to 0 means a linked list. For a linked list, traverse it; if a node with the same key exists, update its value, otherwise append the new node at the tail. For a red-black tree, insert according to the tree's insertion rules.
6) Treeify. After insertion, if the number of nodes in the linked list has reached the threshold of 8, the list is converted to a red-black tree.
7) Finally, increment the stored count and decide whether a resize is needed.
// Add an element.
public V put(K key, V value) {
    return putVal(key, value, false);
}

// Add an element.
final V putVal(K key, V value, boolean onlyIfAbsent) {
    // Null keys and values are rejected.
    if (key == null || value == null) throw new NullPointerException();
    // Spread the hash: XOR the high 16 bits into the low 16 bits, and
    // clear the sign bit so the result is non-negative.
    int hash = spread(key.hashCode());
    // Bin node count: 0 means no node was added, 2 marks a TreeBin, other
    // values count linked-list nodes; used afterwards to decide whether
    // the list should be converted to a red-black tree.
    int binCount = 0;
    // Classic CAS loop: retry until the update succeeds.
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // Lazy initialization: unless a map was copied in the constructor,
        // the table is created on the first put.
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // The bucket for this hash is empty.
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // CAS the new node in as the bucket head; the CAS guards
            // against another thread filling the slot first.
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                     // added successfully
        }
        // The head node's hash is MOVED (-1): the table is being resized.
        else if ((fh = f.hash) == MOVED)
            // Help with the resize.
            tab = helpTransfer(tab, f);
        // Hash collision on a normal bucket, no resize in progress.
        else {
            V oldVal = null;
            // Lock the bucket's head node.
            synchronized (f) {
                // Re-check that the head node has not changed.
                if (tabAt(tab, i) == f) {
                    // fh >= 0 means a linked list; a negative hash other
                    // than -1 means a red-black tree.
                    if (fh >= 0) {
                        binCount = 1;
                        // Walk the list; binCount counts its nodes.
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // Same key found: update the value.
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                // Replace unless onlyIfAbsent was requested.
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            // Reached the tail without a match: append.
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // Red-black tree bucket.
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        // Mark this bin as a tree.
                        binCount = 2;
                        // A non-null return means the key already existed.
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            // Replace unless onlyIfAbsent was requested.
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // A list node was touched: check whether to treeify.
            if (binCount != 0) {
                // If the list now has at least 8 nodes...
                if (binCount >= TREEIFY_THRESHOLD)
                    // ...try converting it to a red-black tree.
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;         // the key already existed
                break;
            }
        }
    }
    // Increment the element count and resize if necessary.
    addCount(1L, binCount);
    return null;
}
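The behavior described above can be observed through the public API; a small usage sketch (PutDemo is an invented name):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static Object[] demo() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Integer prev = map.put("a", 1);           // null: no previous mapping
        Integer replaced = map.put("a", 2);       // returns the old value 1
        Integer kept = map.putIfAbsent("a", 99);  // onlyIfAbsent path: existing value kept
        boolean npe = false;
        try {
            map.put(null, 1);                     // null keys are rejected
        } catch (NullPointerException e) {
            npe = true;
        }
        return new Object[] { prev, replaced, kept, map.get("a"), npe };
    }
}
```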
2.7 Updating the count and deciding whether to resize
1) Try to add to baseCount, falling back to a CounterCell. These operations use CAS atomic updates, with volatile guaranteeing ordering and visibility. The fallback method fullAddCount() retries in a loop until the addition succeeds.
2) Decide whether to resize; multiple threads are supported in assisting the resize.
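The baseCount-plus-CounterCell scheme is essentially the same striping idea as java.util.concurrent.atomic.LongAdder, which can be used to demonstrate contention-tolerant counting (CountDemo is an invented name for the demo):

```java
import java.util.concurrent.atomic.LongAdder;

public class CountDemo {
    // Under contention each thread increments its own striped cell,
    // and sum() adds the base plus every cell, like sumCount().
    public static long countWithThreads(int threads, int perThread) {
        LongAdder count = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) count.increment();
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count.sum();
    }
}
```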
// Update the count and check whether a resize is needed.
private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // If counterCells is non-null, contention has happened before;
    // if it is still null but the CAS on baseCount fails, contention
    // is happening right now. Either way, more work is needed.
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell a; long v; int m;
        boolean uncontended = true;
        // Fall back to fullAddCount() if the cell array is missing or
        // empty, the probed cell is null, or the CAS on that cell fails.
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended =
              U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            // Keep retrying the update of counterCells / baseCount.
            fullAddCount(x, uncontended);
            return;
        }
        // check is -1 for removals/cleanup, 0 for an empty bin's first
        // node, 1 for its second; no resize check is needed here.
        if (check <= 1)
            return;
        // Compute the current element count.
        s = sumCount();
    }
    // check >= 0 means a put; removals pass -1 and skip the resize check.
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        // Loop while the count exceeds the threshold, the table exists,
        // and the table can still grow; the loop guards against several
        // threads triggering resizes at the same time.
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            // Stamp derived from n; identical for every thread as long
            // as n is unchanged.
            int rs = resizeStamp(n);
            // sc < 0: another thread has already started a resize (it
            // stored the stamp shifted left 16 bits, plus 2, in sizeCtl).
            if (sc < 0) {
                // Bail out if the stamp does not match, the resize has
                // finished, the resizer-thread limit is reached, the next
                // table is gone, or no transfer ranges remain. (Note: the
                // JDK 8 source has a known bug here; "sc == rs + 1" should
                // be "sc == (rs << RESIZE_STAMP_SHIFT) + 1", and likewise
                // for the MAX_RESIZERS comparison.)
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break;
                // CAS sc + 1: one more thread joins the resize.
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // sc >= 0: start the first resize. sizeCtl becomes the
            // (negative) stamp shifted left 16 bits, plus 2, meaning
            // 2 - 1 = 1 thread is resizing.
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                transfer(tab, null);
            // Recompute the count: baseCount plus all counter cells.
            s = sumCount();
        }
    }
}
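The stamp arithmetic can be sketched separately. The formula below mirrors the JDK's resizeStamp(); StampDemo and firstResizerSetsNegativeSizeCtl are invented names for the demo:

```java
public class StampDemo {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    // Same formula as the JDK: leading zeros of n, with bit 15 set so
    // that (stamp << RESIZE_STAMP_SHIFT) is guaranteed to be negative.
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static boolean firstResizerSetsNegativeSizeCtl(int n) {
        // What addCount() stores on the first resize: stamp in the high
        // 16 bits, (1 + number of resizing threads) in the low 16 bits.
        int sc = (resizeStamp(n) << RESIZE_STAMP_SHIFT) + 2;
        return sc < 0 && (sc >>> RESIZE_STAMP_SHIFT) == resizeStamp(n);
    }
}
```

Because the stamp depends only on n, every thread that sees the same old capacity computes the same stamp, which is how helpers validate that they are joining the right resize.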
2.8 Assisting with a resize
1) Check that the table has been initialized, that the node is of ForwardingNode type (indicating a resize is in progress), and that the forwarding node's nextTable is not null. If any check fails, no help is needed.
2) Loop while the resize has not finished; use a CAS atomic operation to increment the number of resizing threads, then assist with the resize.
// Help with an ongoing resize.
final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
    Node<K,V>[] nextTab; int sc;
    // Only help if the table exists, the bucket head is a ForwardingNode,
    // and its nextTable has been created.
    if (tab != null && (f instanceof ForwardingNode) &&
        (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
        // Stamp for the current capacity.
        int rs = resizeStamp(tab.length);
        // While neither table reference has been modified concurrently
        // and sizeCtl is still negative, the resize is still in progress.
        while (nextTab == nextTable && table == tab &&
               (sc = sizeCtl) < 0) {
            // Bail out if the stamp does not match, the resize has
            // finished, the resizer-thread limit is reached, or no
            // transfer ranges remain. (Same known JDK 8 bug as in
            // addCount(): "sc == rs + 1" should be
            // "sc == (rs << RESIZE_STAMP_SHIFT) + 1", and likewise for
            // the MAX_RESIZERS comparison.)
            if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                sc == rs + MAX_RESIZERS || transferIndex <= 0)
                break;
            // CAS sc + 1: this thread joins the resize.
            if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                transfer(tab, nextTab);
                break;
            }
        }
        return nextTab;
    }
    return table;
}
2.9 The resize (transfer) method
1) Based on the number of CPU cores, divide the table into equal-sized ranges, one batch per thread; the minimum stride is 16.
2) Exactly one thread builds nextTable, the new array whose capacity has been doubled.
3) The outer for loop processes the slots of each claimed range, while the inner while loop lets a thread claim a not-yet-migrated range.
4) For each slot in the range: if the head node is null, directly place a ForwardingNode there, which tells other threads the slot is done (and invites them to help with the resize).
5) If the head node is not null and its hash is not -1, synchronize on the head node and migrate the bin. Check whether the bin is a linked list or a red-black tree: a linked list is split into a low list and a high list; a red-black tree is split into a low tree and a high tree, each of which may be converted back to a linked list if it has become small enough.
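The low/high split rests on the fact that for a power-of-two capacity n, hash & n is either 0 or n: nodes with 0 stay at index i, the rest move to i + n in the doubled table. A minimal sketch (SplitDemo and newIndex are invented names):

```java
public class SplitDemo {
    // For a table of power-of-two size n, compute where a node at old
    // index (hash & (n - 1)) lands after the table doubles to 2n.
    public static int newIndex(int hash, int n) {
        int i = hash & (n - 1);            // index in the old table
        return ((hash & n) == 0) ? i : i + n;
    }
}
```

This is the same as recomputing hash & (2n - 1) from scratch, which is why a single extra bit decides low bin versus high bin.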
// Perform the resize: migrate every bin from tab to nextTab.
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    // Stride: the range each thread claims, based on the CPU count; 16 at minimum.
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE;
    // nextTab == null means this is the initiating thread
    // (addCount() passes null on the first resize).
    if (nextTab == null) {
        try {
            // Create the new array with doubled capacity.
            @SuppressWarnings("unchecked")
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {
            // Allocation failed: saturate sizeCtl so no further resize is attempted.
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        // Publish the new array.
        nextTable = nextTab;
        // Migration proceeds in reverse order, from index n - 1 down to 0.
        transferIndex = n;
    }
    // Capacity after the resize.
    int nextn = nextTab.length;
    // Placeholder node; its hash is MOVED (-1), so a thread that sees it in a
    // slot skips that slot (or helps with the resize).
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    // advance == true: move on to the next slot; false: process the current one.
    boolean advance = true;
    // finishing == true: the whole table has been migrated; end this method.
    boolean finishing = false;
    // i is the current (upper) index, bound the lower limit of the claimed range.
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        // Claim a range, or step to the next slot within the current range.
        while (advance) {
            int nextIndex, nextBound;
            // Still inside the claimed range (or finished): process slot i.
            if (--i >= bound || finishing)
                advance = false;
            // No ranges left to claim.
            else if ((nextIndex = transferIndex) <= 0) {
                // i = -1 triggers the completion check below.
                i = -1;
                advance = false;
            }
            // Claim the next range of `stride` slots by CAS on transferIndex.
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                // Lower limit of this thread's range.
                bound = nextBound;
                // Upper index of this thread's range.
                i = nextIndex - 1;
                advance = false;
            }
        }
        // Completion check: i has run off either end of the old table.
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            // The last thread has finished: publish the new table.
            if (finishing) {
                nextTable = null;
                table = nextTab;
                // New threshold: 2n - n/2 = 0.75 * 2n.
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            // This thread is done: decrement the low 16 bits of sizeCtl.
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                // Not the last resizer: other threads are still working; return.
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                // Last resizer: recheck the whole table once, then finish.
                finishing = advance = true;
                i = n;
            }
        }
        // Empty slot in the old array: drop in the forwarding placeholder.
        else if ((f = tabAt(tab, i)) == null)
            // If the CAS succeeds, continue with the next slot in the range.
            advance = casTabAt(tab, i, null, fwd);
        // hash == MOVED: already migrated by another thread.
        else if ((fh = f.hash) == MOVED)
            // Continue with the next slot in the range.
            advance = true;
        // A real bin: migrate it under the head node's lock.
        else {
            // Lock the head so no put can modify the bin mid-migration.
            synchronized (f) {
                // Re-check the head node after acquiring the lock.
                if (tabAt(tab, i) == f) {
                    // Low bin stays at index i, high bin goes to index i + n.
                    Node<K,V> ln, hn;
                    // fh >= 0: linked list (a TreeBin head has hash -2).
                    if (fh >= 0) {
                        // (hash & n) is 0 or n and decides low vs. high bin.
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        // Find the last "run": the tail segment whose nodes
                        // all go to the same bin, so it can be reused as-is.
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        // runBit == 0: the run belongs to the low bin.
                        if (runBit == 0) {
                            ln = lastRun;
                            hn = null;
                        }
                        // Otherwise the run belongs to the high bin.
                        else {
                            hn = lastRun;
                            ln = null;
                        }
                        // Clone the nodes before lastRun into the two bins,
                        // stopping at lastRun to avoid unnecessary work.
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        // Publish the low bin at i and the high bin at i + n.
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        // Mark the old slot as migrated.
                        setTabAt(tab, i, fwd);
                        // Continue with the next slot in the range.
                        advance = true;
                    }
                    // Red-black tree bin.
                    else if (f instanceof TreeBin) {
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        // Split the tree's node list into low and high lists.
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            // (h & n) == 0: low bin.
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            // (h & n) == n: high bin.
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        // Untreeify if a half has at most 6 nodes; reuse the
                        // old tree if the other half is empty; otherwise
                        // build a new tree.
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                             (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                             (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        // Publish the low tree at i and the high tree at i + n.
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        // Mark the old slot as migrated.
                        setTabAt(tab, i, fwd);
                        // Continue with the next slot in the range.
                        advance = true;
                    }
                }
            }
        }
    }
}
2.10 The get method
Return the value mapped to the given key. Since this is a read-only operation, no locking is involved. The steps are:
1) Check whether the bucket the key maps to is non-empty.
2) First check whether the bucket's head node is the element being looked up.
3) If the head node is not a match and its hash is negative (a red-black tree, or a bucket being migrated), delegate to the node's own find().
4) If it is a linked list, traverse the whole list.
5) If nothing matches, return null.
// Get an element.
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    // Spread the hash so that it is non-negative; negative hashes mark
    // forwarding nodes (resize in progress) or tree bins.
    int h = spread(key.hashCode());
    // The table is initialized, non-empty, and the target bucket is not null.
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // The hashes match.
        if ((eh = e.hash) == h) {
            // The first node is the element being searched for.
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        // Negative hash: a resize is in progress or the bin is a tree.
        else if (eh < 0)
            // Delegate to the node's find(); return the value, or null.
            return (p = e.find(h, key)) != null ? p.val : null;
        // Otherwise traverse the rest of the chain.
        while ((e = e.next) != null) {
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
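Because get() acquires no lock, reads can proceed while other threads write. A small usage sketch of the public API (nothing here beyond the standard java.util.concurrent classes):

```java
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        // Lock-free read: returns the value, or null when absent.
        System.out.println(map.get("a"));             // 1
        System.out.println(map.get("b"));             // null
        // getOrDefault avoids an explicit null check at the call site.
        System.out.println(map.getOrDefault("b", 0)); // 0
    }
}
```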
2.11 Removal method
Deletion can be regarded as replacing a node's value with null, so deletion and replacement are implemented together in a single method, replaceNode().
replaceNode() takes three parameters: key is the key to operate on; value is the new value, where null means delete; and cv is the expected current value. When cv is non-null, the node is deleted or replaced only if its current value matches cv.
// Remove an element.
public V remove(Object key) {
    return replaceNode(key, null, null);
}

// Remove or replace an element.
final V replaceNode(Object key, V value, Object cv) {
    // Spread the hash; negative hashes mark resize or tree nodes.
    int hash = spread(key.hashCode());
    // Classic CAS-style loop: retry until the operation completes.
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // The table is null or empty, or the target bucket is null.
        if (tab == null || (n = tab.length) == 0 ||
            (f = tabAt(tab, i = (n - 1) & hash)) == null)
            // Nothing to remove; exit the loop.
            break;
        // The bucket's hash is MOVED (-1): a resize is in progress.
        else if ((fh = f.hash) == MOVED)
            // Help with the resize, then retry.
            tab = helpTransfer(tab, f);
        // Normal bucket: hash located, no resize under way.
        else {
            V oldVal = null;
            // Whether the synchronized block did real work.
            boolean validated = false;
            // Lock the first node of the bucket.
            synchronized (f) {
                // Re-check (via a volatile read) that f is still the first node.
                if (tabAt(tab, i) == f) {
                    // Linked-list node.
                    if (fh >= 0) {
                        validated = true;
                        // Walk the chain looking for the key.
                        for (Node<K,V> e = f, pred = null;;) {
                            K ek;
                            // Element found.
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                V ev = e.val;
                                // Update or delete only when cv is null, or
                                // cv matches the element's current value.
                                if (cv == null || cv == ev ||
                                    (ev != null && cv.equals(ev))) {
                                    oldVal = ev;
                                    // A non-null new value means replace.
                                    if (value != null)
                                        e.val = value;
                                    // Null new value, not the first node: unlink it.
                                    else if (pred != null)
                                        pred.next = e.next;
                                    // Null new value, first node: move the head.
                                    else
                                        setTabAt(tab, i, e.next);
                                }
                                break;
                            }
                            pred = e;
                            // Reached the end of the chain without a match.
                            if ((e = e.next) == null)
                                break;
                        }
                    }
                    // Red-black tree node.
                    else if (f instanceof TreeBin) {
                        validated = true;
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        // Element found in the tree.
                        if ((r = t.root) != null &&
                            (p = r.findTreeNode(hash, key, null)) != null) {
                            V pv = p.val;
                            // Update or delete only when cv is null, or
                            // cv matches the element's current value.
                            if (cv == null || cv == pv ||
                                (pv != null && cv.equals(pv))) {
                                oldVal = pv;
                                if (value != null)
                                    p.val = value;
                                else if (t.removeTreeNode(p))
                                    setTabAt(tab, i, untreeify(t.first));
                            }
                        }
                    }
                }
            }
            // The synchronized block did real work.
            if (validated) {
                // A node was updated or deleted.
                if (oldVal != null) {
                    // A null value means this was a deletion.
                    if (value == null)
                        // Decrement the element count.
                        addCount(-1L, -1);
                    return oldVal;
                }
                break;
            }
        }
    }
    return null;
}
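The cv parameter is what backs the conditional public methods: in the JDK 1.8 source, remove(key) calls replaceNode(key, null, null), remove(key, value) calls replaceNode(key, null, value), and replace(key, oldValue, newValue) calls replaceNode(key, newValue, oldValue). A short demonstration of those semantics through the public API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceNodeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("k", 1);

        // Conditional remove: deletes only when the current value matches cv.
        System.out.println(map.remove("k", 2)); // false, value 2 does not match
        System.out.println(map.remove("k", 1)); // true, entry deleted

        map.put("k", 1);
        // Conditional replace: swaps only when the current value matches cv.
        System.out.println(map.replace("k", 2, 9)); // false
        System.out.println(map.replace("k", 1, 9)); // true
        System.out.println(map.get("k"));           // 9
    }
}
```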
2.12 Size calculation
The baseCount field in ConcurrentHashMap holds the total number of elements in the table, but it is not exact: when multiple threads add, delete, and modify concurrently, the CAS update of baseCount may fail, in which case the change is recorded in a cell of the counterCells array instead.
To compute the current size, the values of all cells in counterCells must therefore be added to baseCount.
It is worth noting that even so, the result is still not an exact count of the elements in the table: in a multi-threaded environment, other threads cannot be suspended while the sum is computed, so accuracy cannot be guaranteed.
// Compute the collection size.
public int size() {
    long n = sumCount();
    return ((n < 0L) ? 0 :
            (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
            (int)n);
}

// Sum baseCount and every cell of the counterCells array.
final long sumCount() {
    CounterCell[] as = counterCells; CounterCell a;
    long sum = baseCount;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}
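In practice the count is only an estimate while writers are active, but it is exact once all writers have finished. The sketch below shows this, and also uses mappingCount(), which JDK 8 recommends over size() because it returns a long and avoids the Integer.MAX_VALUE clamp seen above:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SizeDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        // Two writer threads insert disjoint key ranges concurrently.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) map.put(i, i); });
        Thread t2 = new Thread(() -> { for (int i = 1000; i < 2000; i++) map.put(i, i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // After both writers have joined, the count is stable and exact;
        // a size() call made *during* the writes would only be an estimate.
        System.out.println(map.size());         // 2000
        System.out.println(map.mappingCount()); // 2000
    }
}
```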
2.13 Differences among Hashtable, Collections.synchronizedMap(), and ConcurrentHashMap
Hashtable is a thread-safe hash table that ensures thread safety with the synchronized keyword; that is, all threads contend for the same object-level lock. Hashtable is inefficient under heavy contention (ConcurrentHashMap is recommended in that case): while one thread is inside a synchronized method of a Hashtable, any other thread calling a synchronized method of the same Hashtable is blocked.
Collections.synchronizedMap() wraps a Map and uses synchronized to guard every operation with a single mutex, which is essentially the same coarse-grained locking as Hashtable.
ConcurrentHashMap is a thread-safe hash table. In JDK 1.7 it ensures thread safety through lock segmentation, in essence a set of reentrant locks: access by multiple threads to the same segment is mutually exclusive, but accesses to different segments can proceed concurrently. In JDK 1.8 it is implemented with CAS atomic updates, the volatile keyword, and synchronized (an intrinsic, reentrant lock) taken only on individual bucket heads.
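One practical difference worth knowing when choosing among the three: Hashtable and ConcurrentHashMap both reject null keys and values (under concurrent access, get(k) == null would be ambiguous between "absent" and "mapped to null"), while Collections.synchronizedMap() over a HashMap still accepts them. A quick sketch:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMapDemo {
    public static void main(String[] args) {
        Map<String, String> hashtable = new Hashtable<>();
        Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<>());
        Map<String, String> chm = new ConcurrentHashMap<>();

        // synchronizedMap wraps a HashMap, so null keys/values still work.
        syncMap.put(null, "ok");

        // Hashtable and ConcurrentHashMap both throw NullPointerException.
        try { hashtable.put(null, "x"); } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null");
        }
        try { chm.put(null, "x"); } catch (NullPointerException e) {
            System.out.println("ConcurrentHashMap rejects null");
        }
    }
}
```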