JDK source code reading: concurrent HashMap class reading notes

public class ConcurrentHashMap<K,V> extends AbstractMap<K,V> implements ConcurrentMap<K,V>, Serializable { ... }

1. Some important parameters

1.1 MAXIMUM_CAPACITY parameter
/**
 * The largest possible table capacity. This value must be
 * exactly 1<<30 to stay within Java array allocation and indexing
 * bounds for power of two table sizes, and is further required
 * because the top two bits of 32bit hash fields are used for
 * control purposes.
 */
private static final int MAXIMUM_CAPACITY = 1 << 30;

The MAXIMUM_CAPACITY parameter is the maximum capacity of the map, fixed at 1 << 30.

1.2 DEFAULT_CAPACITY parameter
/**
 * The default initial table capacity. Must be a power of 2
 * (i.e., at least 1) and at most MAXIMUM_CAPACITY.
 */
private static final int DEFAULT_CAPACITY = 16;

The DEFAULT_CAPACITY parameter is the default initial capacity of the map, which is 16.

1.3 MAX_ARRAY_SIZE parameter
/**
 * The largest possible (non-power of two) array size.
 * Needed by toArray and related methods.
 */
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

The MAX_ARRAY_SIZE parameter is the maximum length of an array, used by toArray() and related methods. Its value is Integer.MAX_VALUE - 8.

1.4 DEFAULT_CONCURRENCY_LEVEL parameter
/**
 * The default concurrency level for this table. Unused but
 * defined for compatibility with previous versions of this class.
 */
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

The DEFAULT_CONCURRENCY_LEVEL parameter is the default concurrency level. It is unused in the current version (these notes are based on JDK 13) and is kept only for compatibility with previous versions of the class.

1.5 LOAD_FACTOR parameter
/**
 * The load factor for this table. Overrides of this value in
 * constructors affect only the initial table capacity. The
 * actual floating point value isn't normally used -- it is
 * simpler to use expressions such as {@code n - (n >>> 2)} for
 * the associated resizing threshold.
 */
private static final float LOAD_FACTOR = 0.75f;

The LOAD_FACTOR parameter is the load factor, 0.75 by default, the same as in HashMap. As the Javadoc notes, the float value itself is rarely used; the resize threshold is computed with the integer expression n - (n >>> 2), which equals 0.75 * n.
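
A minimal sketch (not JDK code) verifying that the integer expression from the Javadoc matches the 0.75 load factor:

public class LoadFactorSketch {
    public static void main(String[] args) {
        int n = 16;                      // table capacity
        int threshold = n - (n >>> 2);   // n - n/4 = 0.75 * n
        System.out.println(threshold);   // prints 12, i.e. 16 * 0.75
    }
}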

1.6 TREEIFY_THRESHOLD parameter
/**
 * The bin count threshold for using a tree rather than list for a
 * bin. Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2, and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;

The TREEIFY_THRESHOLD parameter is the threshold at which the linked list in a bin is converted into a red-black tree; it is compared against the length of the list.

1.7 UNTREEIFY_THRESHOLD parameter
/**
 * The bin count threshold for untreeifying a (split) bin during a
 * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 * most 6 to mesh with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;

The UNTREEIFY_THRESHOLD parameter is the threshold at which a red-black tree in a bin is converted back into a linked list during a resize split; it is compared against the size of the tree.

1.8 MIN_TREEIFY_CAPACITY parameter
/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * The value should be at least 4 * TREEIFY_THRESHOLD to avoid
 * conflicts between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;

The MIN_TREEIFY_CAPACITY parameter is the minimum table capacity at which the linked list in a bin may be treeified. Only when the capacity of the whole ConcurrentHashMap reaches this value is an overlong list converted into a tree; otherwise the table is resized instead (resizing also shortens the individual lists). A sketch of this decision follows.
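
A simplified sketch (illustrative only, not the JDK code) of the decision roughly made by putVal() together with treeifyBin(); the constants are copied from above:

public class TreeifySketch {
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    // A bin is only turned into a tree when the table itself is large enough;
    // otherwise the table is resized, which also shortens the lists.
    static String decide(int binLength, int tableCapacity) {
        if (binLength < TREEIFY_THRESHOLD)
            return "keep linked list";
        return (tableCapacity < MIN_TREEIFY_CAPACITY) ? "resize table" : "treeify bin";
    }

    public static void main(String[] args) {
        System.out.println(decide(5, 16));   // keep linked list
        System.out.println(decide(8, 16));   // resize table
        System.out.println(decide(8, 64));   // treeify bin
    }
}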

1.9 MIN_TRANSFER_STRIDE parameter
/**
 * Minimum number of rebinnings per transfer step. Ranges are
 * subdivided to allow multiple resizer threads. This value
 * serves as a lower bound to avoid resizers encountering
 * excessive memory contention. The value should be at least
 * DEFAULT_CAPACITY.
 */
private static final int MIN_TRANSFER_STRIDE = 16;

During a resize, the transfer step can be performed by multiple threads concurrently. The MIN_TRANSFER_STRIDE parameter is the minimum amount of work one worker thread claims per transfer step, i.e. the minimum number of consecutive hash buckets it processes. The default is 16, so a worker migrates at least 16 consecutive buckets at a time. See the analysis of the transfer() method below for details; a sketch of the stride calculation follows.
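
A minimal sketch of the stride calculation performed at the top of transfer() (see 4.10), assuming a table length n and NCPU available processors:

public class StrideSketch {
    static final int MIN_TRANSFER_STRIDE = 16;
    static final int NCPU = Runtime.getRuntime().availableProcessors();

    // Same formula as at the top of transfer(): give each worker
    // (n >>> 3) / NCPU buckets, but never less than the minimum stride.
    static int stride(int n) {
        int stride = (NCPU > 1) ? (n >>> 3) / NCPU : n;
        return (stride < MIN_TRANSFER_STRIDE) ? MIN_TRANSFER_STRIDE : stride;
    }

    public static void main(String[] args) {
        System.out.println(stride(16));    // 16: small tables use the minimum stride
        System.out.println(stride(4096));  // depends on NCPU, but at least 16
    }
}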

1.10 RESIZE_STAMP_BITS parameter (not understood)
/**
 * The number of bits used for generation stamp in sizeCtl.
 * Must be at least 6 for 32bit arrays.
 */
private static final int RESIZE_STAMP_BITS = 16;

The RESIZE_STAMP_BITS parameter is the number of bits used for the generation stamp stored in sizeCtl; the stamp uniquely identifies each resize generation.

1.11 MAX_RESIZERS parameter (not understood)
/**
 * The maximum number of threads that can help resize.
 * Must fit in 32 - RESIZE_STAMP_BITS bits.
 */
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;

This parameter defines the maximum number of worker threads that can help resize: MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1. The idea is that during a resize sizeCtl carries the generation stamp in its high RESIZE_STAMP_BITS (16) bits and a thread count in the remaining low 16 bits, so the count must fit into 32 - RESIZE_STAMP_BITS bits; the largest value that fits is 2^16 - 1 = 65535.

1.12 RESIZE_STAMP_SHIFT parameter (not understood)
/**
 * The bit shift for recording size stamp in sizeCtl.
 */
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

This parameter defines the bit shift used to place the resize stamp into sizeCtl: RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS = 16. Shifting the stamp left by this amount moves it into the high 16 bits of sizeCtl, which makes sizeCtl negative during a resize, while the low 16 bits track the resizing threads. A worked example follows.
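
A worked sketch (illustrative, with standalone copies of the constants and of resizeStamp() shown in 4.11) of how the stamp is computed and packed into sizeCtl, assuming a table of length 16:

public class ResizeStampSketch {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
    static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;

    // Same formula as ConcurrentHashMap.resizeStamp(n).
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int n = 16;
        int rs = resizeStamp(n);                      // 27 | 0x8000 = 0x801b
        int sizeCtl = (rs << RESIZE_STAMP_SHIFT) + 2; // first resizer: stamp in high 16 bits, +2 in low bits
        System.out.println(Integer.toHexString(rs));  // 801b
        System.out.println(sizeCtl < 0);              // true: sizeCtl is negative while resizing
        System.out.println(MAX_RESIZERS);             // 65535
    }
}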

1.13 hash status parameters of special nodes
/*
 * Encodings for Node hash fields. See above for explanation.
 */
static final int MOVED     = -1; // hash for forwarding nodes
static final int TREEBIN   = -2; // hash for roots of trees
static final int RESERVED  = -3; // hash for transient reservations

Normally a node's hash value is non-negative; a negative hash marks a special node.

  • When the hash value is -1, the node is a ForwardingNode.
    • A ForwardingNode is a temporary node that appears only during a resize and stores no actual data.
    • Once all the nodes in a bucket of the old array have been migrated to the new array, a ForwardingNode is placed in that bucket of the old array.
    • When a read or iteration encounters a ForwardingNode, the operation is forwarded to the new (post-resize) table; when a write encounters one, the writing thread tries to help with the resize.
  • When the hash value is -2, the node is a TreeBin.
    • A TreeBin is a special node that ConcurrentHashMap uses as a proxy for TreeNode operations; it holds the root of the red-black tree that stores the actual data.
    • Because a write to a red-black tree may restructure the whole tree, which would disturb concurrent readers, TreeBin also maintains a simple read-write lock. This is an important reason this special node was introduced compared with HashMap.
  • When the hash value is -3, the node is a ReservationNode, i.e. a placeholder.
    • It does not normally appear (it is used only as a placeholder in methods such as computeIfAbsent).
1.14 HASH_BITS parameter
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

HASH_BITS is the mask of usable bits of a normal node hash (a similar mask, 0x7fffffff, also appears in Hashtable). ANDing a hash code with it clears the sign bit, so a negative hash becomes non-negative.

1.15 NCPU parameter
/** Number of CPUS, to place bounds on some sizings */
static final int NCPU = Runtime.getRuntime().availableProcessors();

The NCPU parameter holds the number of processors available to the current JVM and is used to bound some sizings.

2. Some important attributes

It is worth noting that the key attributes in ConcurrentHashMap are basically volatile variables.

2.1 table attribute
/**
 * The array of bins. Lazily initialized upon first insertion.
 * Size is always a power of two. Accessed directly by iterators.
 */
transient volatile Node<K,V>[] table;

The table attribute stores the nodes; it is the array of buckets (bins), lazily initialized on first insertion.

2.2 nextTable attribute
/**
 * The next table to use; non-null only while resizing.
 */
private transient volatile Node<K,V>[] nextTable;

The nextTable attribute is the next array to be used; it assists the resize operation and is non-null only while resizing.

2.3 baseCount attribute
/**
 * Base counter value, used mainly when there is no contention,
 * but also as a fallback during table initialization
 * races. Updated via CAS.
 */
private transient volatile long baseCount;

The baseCount attribute is the base counter value used when there is no contention; it also serves as a fallback during table-initialization races and is updated via CAS.

2.4 sizeCtl attribute
/**
 * Table initialization and resizing control. When negative, the
 * table is being initialized or resized: -1 for initialization,
 * else -(1 + the number of active resizing threads). Otherwise,
 * when table is null, holds the initial table size to use upon
 * creation, or 0 for default. After initialization, holds the
 * next element count value upon which to resize the table.
 */
private transient volatile int sizeCtl;

The sizeCtl attribute controls table initialization and resizing (a small sketch interpreting its values follows this list).

  • When sizeCtl is negative, the table is being initialized or resized:
    • -1 during initialization.
    • -(1 + the number of active resizing threads) during a resize.
  • When sizeCtl is non-negative:
    • If the table is null, it holds the initial table size to use on creation, or 0 for the default.
    • If the table is non-null, it holds the element count at which the table is next resized (the resize threshold).
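
A minimal, illustrative sketch (not JDK code) that decodes a sizeCtl value according to the rules above; whether the table is null is assumed to be known by the caller:

public class SizeCtlSketch {
    // Interpret sizeCtl per the Javadoc above (illustrative only).
    static String describe(int sizeCtl, boolean tableIsNull) {
        if (sizeCtl == -1)
            return "table is being initialized";
        if (sizeCtl < 0)
            return "table is being resized";
        if (tableIsNull)
            return sizeCtl == 0 ? "use the default initial capacity"
                                : "initial capacity to use: " + sizeCtl;
        return "next resize threshold: " + sizeCtl;
    }

    public static void main(String[] args) {
        System.out.println(describe(-1, true));    // initializing
        System.out.println(describe(0, true));     // default capacity
        System.out.println(describe(12, false));   // threshold = 12 (e.g. 16 * 0.75)
    }
}
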
2.5 transferIndex attribute
/**
 * The next table index (plus one) to split while resizing.
 */
private transient volatile int transferIndex;

The transferIndex attribute is the next table index (plus one) to split while resizing; worker threads claim bucket ranges downward from this index.

2.6 cellsBusy attribute
/**
 * Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
 */
private transient volatile int cellsBusy;

A spinlock (locked via CAS) used when resizing the counter-cell table and/or creating CounterCells.

2.7 counterCells array
/**
 * Table of counter cells. When non-null, size is a power of 2.
 */
private transient volatile CounterCell[] counterCells;

This is the array of counter cells, i.e. the counting units; when non-null, its size is a power of 2.

3. Internal class

3.1 Node internal class

The Node inner class is an abstraction of ordinary nodes in the ConcurrentHashMap class.

/** * Key-value entry. This class is never exported out as a * user-mutable Map.Entry (i.e., one supporting setValue; see * MapEntry below), but can be used for read-only traversals used * in bulk tasks. Subclasses of Node with a negative hash field * are special, and contain null keys and values (but are never * exported). Otherwise, keys and vals are never null. */ static class Node<K,V> implements Map.Entry<K,V> { final int hash; final K key; volatile V val; volatile Node<K,V> next; Node(int hash, K key, V val) { this.hash = hash; this.key = key; this.val = val; } Node(int hash, K key, V val, Node<K,V> next) { this(hash, key, val); this.next = next; } public final K getKey() { return key; } public final V getValue() { return val; } public final int hashCode() { return key.hashCode() ^ val.hashCode(); } public final String toString() { return Helpers.mapEntryToString(key, val); } public final V setValue(V value) { throw new UnsupportedOperationException(); } public final boolean equals(Object o) { Object k, v, u; Map.Entry<?,?> e; return ((o instanceof Map.Entry) && (k = (e = (Map.Entry<?,?>)o).getKey()) != null && (v = e.getValue()) != null && (k == key || k.equals(key)) && (v == (u = val) || v.equals(u))); } /** * Virtualized support for map.get(); overridden in subclasses. */ Node<K,V> find(int h, Object k) { Node<K,V> e = this; if (k != null) { do { K ek; if (e.hash == h && ((ek = e.key) == k || (ek != null && k.equals(ek)))) return e; } while ((e = e.next) != null); } return null; } }

significance

The Node inner class is the basic entry implementation used by ConcurrentHashMap.

Implementation of hashCode()

Note the implementation of hashCode(): key.hashCode() ^ val.hashCode() (keys and values of normal nodes are never null).

find()

Note that the Node class's find() method is not called from ordinary business methods such as get(), because those methods traverse the list directly; it is invoked through the overriding find() methods of special nodes such as ForwardingNode.

4. Tools and methods

4.1 spread method
/**
 * Spreads (XORs) higher bits of hash to lower and also forces top
 * bit to 0. Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.) So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}

Hash conflicts are reduced by XORing the high 16 bits of the hash into the low bits, and the mask with HASH_BITS ensures the result is non-negative.

This is often called the perturbation (hash-spreading) function; a short example follows.
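
A minimal sketch (with a standalone copy of HASH_BITS) showing how the spread hash is turned into a bucket index with a power-of-two mask, the same way get() and putVal() do:

public class SpreadSketch {
    static final int HASH_BITS = 0x7fffffff;

    // Same formula as ConcurrentHashMap.spread().
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int n = 16;                         // table length (power of two)
        int h = spread("hello".hashCode()); // never negative thanks to HASH_BITS
        int index = (n - 1) & h;            // bucket index, as in get()/putVal()
        System.out.println(index);          // some value in [0, 15]
    }
}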

4.2 tableSizeFor method
/**
 * Returns a power of two table size for the given desired capacity.
 * See Hackers Delight, sec 3.2
 */
private static final int tableSizeFor(int c) {
    int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}

The tableSizeFor method rounds the desired capacity c up to the next power-of-two table size. It appears, for example, in the constructor shown in 5.1 as cap = tableSizeFor((int) size), whose result is stored in sizeCtl as the initial table size. A small example of the rounding follows.
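
A small sketch (standalone copy of the method, with MAXIMUM_CAPACITY assumed to be 1 << 30 as above) illustrating the rounding behaviour:

public class TableSizeForSketch {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same formula as ConcurrentHashMap.tableSizeFor().
    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(16)); // 16: already a power of two
        System.out.println(tableSizeFor(17)); // 32: rounded up
        System.out.println(tableSizeFor(1));  // 1
    }
}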

4.3 comparableClassFor method
/**
 * Returns x's Class if it is of the form "class C implements
 * Comparable<C>", else null.
 */
static Class<?> comparableClassFor(Object x) {
    if (x instanceof Comparable) {
        Class<?> c; Type[] ts, as; ParameterizedType p;
        // If it is a String, return its class directly
        if ((c = x.getClass()) == String.class)
            return c;
        if ((ts = c.getGenericInterfaces()) != null) {
            for (Type t : ts) {
                if ((t instanceof ParameterizedType) &&
                    ((p = (ParameterizedType)t).getRawType() == Comparable.class) &&
                    (as = p.getActualTypeArguments()) != null &&
                    as.length == 1 && as[0] == c) // type arg is c
                    return c;
            }
        }
    }
    return null;
}

If parameter x is of the form class C implements Comparable<C>, its class is returned; otherwise null.

4.4 compareComparables method
/**
 * Returns k.compareTo(x) if x matches kc (k's screened comparable
 * class), else 0.
 */
@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
    return (x == null || x.getClass() != kc ? 0 : ((Comparable)k).compareTo(x));
}

If the object x matches the comparable class kc of K, k.compareTo(x) is returned; otherwise, 0 is returned.

4.5 Table element access methods
4.5.1 tabAt method
static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
    return (Node<K,V>)U.getReferenceAcquire(tab, ((long)i << ASHIFT) + ABASE);
}

The tabAt() method can obtain the Node at the i position.

4.5.2 casTabAt method
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i, Node<K,V> c, Node<K,V> v) {
    return U.compareAndSetReference(tab, ((long)i << ASHIFT) + ABASE, c, v);
}

The casTabAt() method updates the Node at index i via CAS (compare-and-set).

4.5.3 setTabAt method
static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
    U.putReferenceRelease(tab, ((long)i << ASHIFT) + ABASE, v);
}

The setTabAt method can set the Node at the i position.

Note: Unsafe.getReferenceAcquire() and Unsafe.putReferenceRelease() are the acquire and release variants of the volatile access methods in Unsafe; for example, putReferenceRelease() is the release-ordered counterpart of putReferenceVolatile(). A rough VarHandle-based equivalent is sketched below.
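
For readers who prefer not to touch Unsafe, roughly equivalent helpers can be sketched with the public VarHandle API (illustrative only; the JDK itself uses Unsafe with the ABASE/ASHIFT offsets shown above):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class TabAccessSketch {
    // VarHandle over Object[] elements, giving acquire/release/CAS access modes.
    private static final VarHandle AA = MethodHandles.arrayElementVarHandle(Object[].class);

    static Object tabAt(Object[] tab, int i) {
        return AA.getAcquire(tab, i);                     // like U.getReferenceAcquire
    }

    static boolean casTabAt(Object[] tab, int i, Object expected, Object value) {
        return AA.compareAndSet(tab, i, expected, value); // like U.compareAndSetReference
    }

    static void setTabAt(Object[] tab, int i, Object value) {
        AA.setRelease(tab, i, value);                     // like U.putReferenceRelease
    }
}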

4.6 initTable method
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        // If sizeCtl is less than 0, initialization or a resize is already in progress
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // If SIZECTL is still sc, set it to -1, marking that we have entered initialization
        else if (U.compareAndSetInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    // Get the initial size (when sc is positive, it is the initial size)
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    // Create the node array
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    // Assign it to the table field
                    table = tab = nt;
                    sc = n - (n >>> 2);
                }
            } finally {
                // Finally, remember to update sizeCtl
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}

The initTable() method initializes an empty table.

4.7 hashCode method
public int hashCode() {
    int h = 0;
    Node<K,V>[] t;
    if ((t = table) != null) {
        Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
        for (Node<K,V> p; (p = it.advance()) != null; )
            h += p.key.hashCode() ^ p.val.hashCode();
    }
    return h;
}

The hashCode() method traverses every key-value pair, XORs each key's hash code with its value's hash code, and sums the results.

4.8 addCount method

The addCount() method is called whenever the number of elements in the ConcurrentHashMap changes. Of its two parameters, the first (x) is the count delta and the second (check) controls whether a resize check is needed.

private final void addCount(long x, int check) { // Create counter cell CounterCell[] cs; long b, s; /** 1.If counterCells is null: Then, it indicates that there has been no concurrency conflict before. Then, U.compareAndSetLong(...,b+x) will be executed to directly update the count value baseCount. If the local method is executed successfully, it will return true, and if it is reversed, it will be false. Then, the whole if determines that the two conditions are false, and the contents in the if block are not executed. 2.If couterCells is not null: It indicates that concurrency conflicts have occurred before, and the following if block processing is required. Here, if the first condition is true, the update method of the second condition will not be executed. */ if ((cs = counterCells) != null || !U.compareAndSetLong(this, BASECOUNT, b = baseCount, s = b + x)) { // Enter the if block, indicating that there has been a concurrency conflict, then add the value to the CounterCell CounterCell c; long v; int m; boolean uncontended = true; if (cs == null // cs becomes null again in concurrency || (m = cs.length - 1) < 0 // cs length less than 1 || (c = cs[ThreadLocalRandom.getProbe() & m]) == null // The corresponding CouterCell is null || !(uncontended = U.compareAndSetLong(c, CELLVALUE, v = c.value, v + x))) {// Attempt to update the value of the found count cell c // If the update fails. Generally, the method in the last condition above returns false, and the reverse is true // Description there is a concurrency conflict in the CounterCells array, which may involve the expansion of the array. Call the fullAddCount method fullAddCount(x, uncontended); return; } if (check <= 1)// If there is no need to check, return directly return; // Count and save it in s. the following is used for inspection s = sumCount(); } // Check whether capacity expansion is required if (check >= 0) { Node<K,V>[] tab, nt; int n, sc; while (s >= (long)(sc = sizeCtl) // The number of elements is greater than the capacity expansion threshold: capacity expansion is required && (tab = table) != null // Table is not empty && (n = tab.length) < MAXIMUM_CAPACITY) {// Table length does not reach the upper limit int rs = resizeStamp(n) << RESIZE_STAMP_SHIFT; // If you are performing resize if (sc < 0) { // Give up some conditions to help expand capacity if (sc == rs + MAX_RESIZERS || sc == rs + 1 || (nt = nextTable) == null || transferIndex <= 0) break; // sc+1 indicates that a new thread is added to help expand the capacity if (U.compareAndSetInt(this, SIZECTL, sc, sc + 1)) transfer(tab, nt); } // Currently, resizing is not being executed. Try to become the first thread to enter the capacity expansion. Set sc to rs+2 else if (U.compareAndSetInt(this, SIZECTL, sc, rs + 2)) transfer(tab, null); // Recalculate the number of elements s = sumCount(); } } }

See the code comments for detailed logic. Here are a few separate points.

  • The first if condition is clever: depending on whether the counterCells array is null, the delta is either added directly to baseCount or added to the corresponding counter cell.
  • Note how the slot in the counterCells array is located: c = cs[ThreadLocalRandom.getProbe() & m], where m is cs.length - 1.
  • When the check parameter is less than or equal to 1, the method returns without checking; when it is greater than 1, the resize check runs after the main counting logic. When put calls addCount, the check argument is the bin count observed during putVal, so the logic fits: if the bin held at most one node (or was empty), there is no need to consider resizing again; otherwise addCount performs the check. (A simplified sketch of the counting scheme follows this list.)
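
A much-simplified sketch of the counting idea (a base counter plus striped cells, in the spirit of LongAdder). This is illustrative code only, not the JDK implementation; it uses AtomicLong/AtomicLongArray instead of raw CAS on fields and omits the fullAddCount fallback:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicLongArray;

public class StripedCounterSketch {
    private final AtomicLong baseCount = new AtomicLong();
    private final AtomicLongArray cells = new AtomicLongArray(16); // power-of-two size

    void add(long x) {
        // Fast path: try to update the base counter directly (no contention).
        long b = baseCount.get();
        if (baseCount.compareAndSet(b, b + x))
            return;
        // Contended path: pick a cell from a per-thread probe value and update it.
        int m = cells.length() - 1;
        int slot = ThreadLocalRandom.current().nextInt() & m; // stands in for getProbe() & m
        cells.addAndGet(slot, x);
    }

    long sum() {
        long sum = baseCount.get();
        for (int i = 0; i < cells.length(); i++)
            sum += cells.get(i);
        return sum;
    }
}
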
4.9 helpTransfer method

The helpTransfer method lets the current thread assist the data migration while the table is being resized, and it returns the new array. It is called from business methods such as put and remove when they encounter a ForwardingNode.

/** * Helps transfer if a resize is in progress. */ final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) { Node<K,V>[] nextTab; int sc; // Three conditions need to be met simultaneously to enter the main logic of the method if (tab != null// Table is not empty && (f instanceof ForwardingNode)// f is a Forwarding Node && (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) // nextTable is not empty { // Calculate the mark "stamp" during this resize int rs = resizeStamp(tab.length) << RESIZE_STAMP_SHIFT; while (nextTab == nextTable // nextTab unchanged && table == tab // table unchanged && (sc = sizeCtl) < 0) // Sizecl remains less than 0 (resizing) { if (sc == rs + MAX_RESIZERS // The number of worker threads is full || sc == rs + 1 // In the addCount method, if there is the first capacity expansion thread, sc=rs+2. If it becomes rs+1, the expansion is over. || transferIndex <= 0) // If transferIndex is less than or equal to 0, it actually indicates that the expansion has been completed and the subscript adjustment has been entered. break; // Enable sc + + to enter capacity expansion if (U.compareAndSetInt(this, SIZECTL, sc, sc + 1)) { transfer(tab, nextTab); break; } } // Return to new table return nextTab; } // Return to original table return table; }
4.10 transfer method

The transfer method moves and/or copies the nodes in each bin to the new table. It is called from addCount() and helpTransfer() and is the core of the resize implementation.

Where concrete numbers appear in the comments below, assume the incoming tab has length 16.

private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) { // Define n as the table length. int n = tab.length, stride; /** stride Represents the number of tasks of a worker thread in a transfer, that is, the number of consecutive hash buckets to be processed. Initialize stripe: if the number of available CPU cores is greater than 1, initialize to (n > > > 3) / ncpu; otherwise, initialize to n. If the initialized stripe is less than MIN_TRANSFER_STRIDE, set it to this minimum. */ if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE) stride = MIN_TRANSFER_STRIDE; // subdivide range if (nextTab == null) { // If nextTab is not initialized, initialize the array first try { @SuppressWarnings("unchecked")' // Create a nextTab array with the length of the original array * 2 Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1]; nextTab = nt; } catch (Throwable ex) { // Failed to create a new array. sizeCtl is set to the maximum value of int sizeCtl = Integer.MAX_VALUE; return; } // This array is assigned to nextTable nextTable = nextTab; // Update transfer subscript transferIndex = n; } int nextn = nextTab.length; // Create ForwardingNode fwd and pass in nextTab as the parameter ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab); // The first advance is true. If it is equal to true, it indicates that a subscript (i --) needs to be pushed again. On the contrary, if it is false, the subscript cannot be pushed. The current subscript needs to be processed before proceeding boolean advance = true; // Mark whether the expansion has been completed boolean finishing = false; // to ensure sweep before committing nextTab /** It is also a for loop to process the linked list elements in each slot */ for (int i = 0, bound = 0;;) { Node<K,V> f; int fh; /** This while loop continuously tries to allocate tasks to the current thread through CAS until the allocation succeeds or the task queue has been fully allocated. If the thread has been allocated a bucket area, it will point to the next pending bucket through -- i and exit the loop. */ while (advance) { int nextIndex, nextBound; // --i indicates entering the next bucket to be processed. Greater than or equal to bound after subtraction indicates that the current thread has allocated buckets, and advance=false if (--i >= bound || finishing) advance = false; // All bucket s have been allocated. Assign value to nextIndex. else if ((nextIndex = transferIndex) <= 0) { i = -1; advance = false; } // CAS modifies TRANSFERINDEX to assign tasks to threads. // The processing node interval is (nextBound,nextINdex) else if (U.compareAndSetInt (this, TRANSFERINDEX, nextIndex, nextBound = (nextIndex > stride ? nextIndex - stride : 0))) { bound = nextBound; i = nextIndex - 1; advance = false; } } // Processing process // CASE1: the old array has been traversed, and the current thread has processed all responsible bucket s if (i < 0 || i >= n || i + n >= nextn) { int sc; // Capacity expansion completed if (finishing) { // Delete the member variable nextTable nextTable = null; // Update array table = nextTab; // Update capacity expansion threshold sizeCtl = (n << 1) - (n >>> 1); return; } // Use the CAS operation to subtract 1 from the lower 16 bits of sizeCtl, which means that you have completed your own task if (U.compareAndSetInt(this, SIZECTL, sc = sizeCtl, sc - 1)) { if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT) return; // If the above if is not executed, i.e. 
(SC - 2) = = resizestamp (n) < < resize_ STAMP_ SHIFT // This indicates that there is no thread for capacity expansion, and the capacity expansion is over finishing = advance = true; i = n; // recheck before commit } } // CASE2: if node i is empty, put it into the ForwardingNode just initialized else if ((f = tabAt(tab, i)) == null) advance = casTabAt(tab, i, null, fwd); // CASE3: the current hash value of this location is MOVED, which is a ForwardingNode. It indicates that it has been processed by other threads, so it is required to continue else if ((fh = f.hash) == MOVED) advance = true; // already processed // CASE4: execute transfer else { // Lock the head node synchronized (f) { // Check again if (tabAt(tab, i) == f) { Node<K,V> ln, hn; // The head node in the slot is a chain head node if (fh >= 0) { // First calculate the current fh * n int runBit = fh & n; // Stores the lastRun that traverses the final position Node<K,V> lastRun = f; // Traversal linked list for (Node<K,V> p = f.next; p != null; p = p.next) { int b = p.hash & n; // If hash&n changes during traversal, runBit and lastRun need to be updated if (b != runBit) { runBit = b; lastRun = p; } } //If lastRun refers to a low-level linked list, make ln lastRun if (runBit == 0) { ln = lastRun; hn = null; } // If lastrun refers to a high-order linked list, make hn lastrun else { hn = lastRun; ln = null; } // Traverse the linked list, put the hash & n with 0 in the low-level linked list and those not with 0 in the high-level linked list // Loop out condition: current loop node= lastRun for (Node<K,V> p = f; p != lastRun; p = p.next) { int ph = p.hash; K pk = p.key; V pv = p.val; if ((ph & n) == 0) ln = new Node<K,V>(ph, pk, pv, ln); else hn = new Node<K,V>(ph, pk, pv, hn); } // The position of the low linked list remains unchanged setTabAt(nextTab, i, ln); // The position of the high-order linked list is: original position + n setTabAt(nextTab, i + n, hn); // Mark current bucket migrated setTabAt(tab, i, fwd); // If advance is true, return to the above for --i operation advance = true; } // The head node in the slot is a tree node else if (f instanceof TreeBin) { TreeBin<K,V> t = (TreeBin<K,V>)f; TreeNode<K,V> lo = null, loTail = null; TreeNode<K,V> hi = null, hiTail = null; int lc = 0, hc = 0; for (Node<K,V> e = t.first; e != null; e = e.next) { int h = e.hash; TreeNode<K,V> p = new TreeNode<K,V> (h, e.key, e.val, null, null); if ((h & n) == 0) { if ((p.prev = loTail) == null) lo = p; else loTail.next = p; loTail = p; ++lc; } else { if ((p.prev = hiTail) == null) hi = p; else hiTail.next = p; hiTail = p; ++hc; } } ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) : (hc != 0) ? new TreeBin<K,V>(lo) : t; hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) : (lc != 0) ? new TreeBin<K,V>(hi) : t; setTabAt(nextTab, i, ln); setTabAt(nextTab, i + n, hn); setTabAt(tab, i, fwd); advance = true; } // The head node in the slot is a reserved placeholder node else if (f instanceof ReservationNode) throw new IllegalStateException("Recursive update"); } } } } }

The transfer() method is the core of ConcurrentHashMap's resizing. The migration itself is similar to HashMap's: each original linked list is split into a "low" list and a "high" list.

However, there are many differences in the implementation details; see the comments in the source code above. A sketch of the low/high split rule follows.
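
A minimal sketch (illustrative, not JDK code) of the rule that decides where a node lands after the table doubles from n to 2n:

public class SplitRuleSketch {
    // During transfer, a node in bucket i of the old table (length n) goes to
    // bucket i of the new table if (hash & n) == 0, and to bucket i + n otherwise.
    static int newIndex(int hash, int oldIndex, int n) {
        return ((hash & n) == 0) ? oldIndex : oldIndex + n;
    }

    public static void main(String[] args) {
        int n = 16;
        int h1 = 5;   // 5 & 16 == 0  -> stays at its old index
        int h2 = 21;  // 21 & 16 != 0 -> moves to old index + 16
        System.out.println(newIndex(h1, h1 & (n - 1), n)); // 5
        System.out.println(newIndex(h2, h2 & (n - 1), n)); // 5 + 16 = 21
    }
}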

4.11 resizeStamp method
/**
 * Returns the stamp bits for resizing a table of size n.
 * Must be negative when shifted left by RESIZE_STAMP_SHIFT.
 */
static final int resizeStamp(int n) {
    return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
}

The resizeStamp(int n) method computes the stamp bits used when resizing a table of size n; shifted left by RESIZE_STAMP_SHIFT, the stamp becomes negative (see the worked example in 1.12).

5. Business methods

5.1 construction method
// Default construction method public ConcurrentHashMap() { } // Construction method of providing only initial capacity public ConcurrentHashMap(int initialCapacity) { this(initialCapacity, LOAD_FACTOR, 1); } // Provides the construction method of map public ConcurrentHashMap(Map<? extends K, ? extends V> m) { this.sizeCtl = DEFAULT_CAPACITY; putAll(m); } // Provides the construction method of default capacity and load factor public ConcurrentHashMap(int initialCapacity, float loadFactor) { this(initialCapacity, loadFactor, 1); } // Provides the construction method of default capacity, load factor and number of Concurrent update threads. public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) { if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0) throw new IllegalArgumentException(); // If the initial capacity is smaller than the number of Concurrent update threads, assign a new value to it if (initialCapacity < concurrencyLevel) // Use at least as many bins initialCapacity = concurrencyLevel; // as estimated threads long size = (long)(1.0 + (long)initialCapacity / loadFactor); // cap is assigned as the maximum capacity or expansion threshold int cap = (size >= (long)MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : tableSizeFor((int)size); this.sizeCtl = cap; }
5.2 size method
// Count cell array private transient volatile CounterCell[] counterCells; public int size() { // Call sumCount() long n = sumCount(); return ((n < 0L) ? 0 : (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE : (int)n); } final long sumCount() { // Get count cell array CounterCell[] cs = counterCells; long sum = baseCount; if (cs != null) { // The values in all counting units are added up for (CounterCell c : cs) if (c != null) sum += c.value; } return sum; } // A very simple counting unit with only one volatile counter value @jdk.internal.vm.annotation.Contended // This annotation ensures that the object of the current class has exclusive cache lines static final class CounterCell { // Only constructors are provided, but get/set methods are not provided. That is, the value of value is determined during initialization and will not be changed later volatile long value; CounterCell(long x) { value = x; } }

The size() method first reads baseCount, the counter value maintained when there is no contention, and then adds up the counts stored in the counterCells array. The following measures help keep the count thread safe:

  • The counterCells array reference and the value field of CounterCell are both declared volatile.
  • The CounterCell class exposes no get/set methods for value; the field is only updated via CAS (see addCount).

So how is the counterCells array created and initialized, and how is baseCount increased? This is explained later with the source of the business methods that change the size, such as put() and its call to addCount().

5.3 isEmpty method
public boolean isEmpty() {
    return sumCount() <= 0L; // ignore transient negative values
}

See 5.2 for the sumCount() method.

5.4 get method
public V get(Object key) { Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek; // DP Hash int h = spread(key.hashCode()); if ((tab = table) != null // Table is not empty && (n = tab.length) > 0 // Table length is not 0 && (e = tabAt(tab, (n - 1) & h)) != null) {// The specified location is not null // The first position is the key to be found if ((eh = e.hash) == h) { if ((ek = e.key) == key || (ek != null && key.equals(ek))) return e.val; } else if (eh < 0)// The hash value of the current linked list header is less than 0, indicating that it is a special node // Call the find method of the special node e return (p = e.find(h, key)) != null ? p.val : null; // A normal node, normal linked list, normal traversal while ((e = e.next) != null) { if (e.hash == h && ((ek = e.key) == key || (ek != null && key.equals(ek)))) return e.val; } } return null; }

Note that the method first computes the hash (and thus the bucket position) of the key being searched, and then handles the head node of that bucket differently depending on its hash value.

  • If the head node's hash matches and its key equals the searched key, its value is returned directly.
  • If the hash value is less than 0, the node is a special node; refer to 1.13 hash status parameters of special nodes. In this case the special node's find() method is called, for example the find() methods of the ForwardingNode and TreeBin classes.
  • Otherwise (hash >= 0), the linked list in the bucket is traversed.
5.5 containsKey method
public boolean containsKey(Object key) {
    return get(key) != null;
}
5.6 containsValue method
public boolean containsValue(Object value) {
    if (value == null)
        throw new NullPointerException();
    Node<K,V>[] t;
    if ((t = table) != null) {
        Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
        for (Node<K,V> p; (p = it.advance()) != null; ) {
            V v;
            if ((v = p.val) == value || (v != null && value.equals(v)))
                return true;
        }
    }
    return false;
}

The Traverser class encapsulates the traversal logic used by containsValue. Its code is fairly complex and is not covered here for now.

5.7 put method
public V put(K key, V value) { return putVal(key, value, false); } final V putVal(K key, V value, boolean onlyIfAbsent) { // Air judgment if (key == null || value == null) throw new NullPointerException(); // DP Hash int hash = spread(key.hashCode()); // Counter for current bucket int binCount = 0; // Spin insert node until successful for (Node<K,V>[] tab = table;;) { Node<K,V> f; int n, i, fh; K fk; V fv; // CASE1: if the table is empty, call the initialization method first if (tab == null || (n = tab.length) == 0) tab = initTable(); // CASE2: if the hash location node is empty, it is unlocked when inserting into the empty location else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) { // Try to put the key value pair to put directly here if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value))) break;// sign out } // CASE3: if the hash value of the hash location node is - 1, it is a Forwarding Node. Call helperTransfer() else if ((fh = f.hash) == MOVED) // Assist in transferring data and getting new arrays tab = helpTransfer(tab, f); // CASE4: if onlyIfAbsent is true and the header node is the required node, return it directly else if (onlyIfAbsent && fh == hash && ((fk = f.key) == key || (fk != null && key.equals(fk))) && (fv = f.val) != null) return fv; // CASE5: the specified location was found and is not empty (hash conflict occurred). else { V oldVal = null; synchronized (f) {// Lock the current node (chain header) if (tabAt(tab, i) == f) {// Then judge whether f is the head node to prevent it from being modified by other threads // if - is not a special node if (fh >= 0) { binCount = 1; for (Node<K,V> e = f;; ++binCount) {// Note that the counter is incremented during traversal K ek; // In the process of traversal, the value you want to insert is found. It will be returned according to the situation if (e.hash == hash && ((ek = e.key) == key || (ek != null && key.equals(ek)))) { oldVal = e.val; if (!onlyIfAbsent) e.val = value; break; } // If the tail is reached, a new node built by the current key value is inserted Node<K,V> pred = e; if ((e = e.next) == null) { pred.next = new Node<K,V>(hash, key, value); break; } } } // elseIf - is a tree node else if (f instanceof TreeBin) { Node<K,V> p; binCount = 2; if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key, value)) != null) { oldVal = p.val; if (!onlyIfAbsent) p.val = value; } } // else - if it is a reserved node else if (f instanceof ReservationNode) throw new IllegalStateException("Recursive update"); } } // After the insertion, check whether you need to treelize the current linked list if (binCount != 0) { if (binCount >= TREEIFY_THRESHOLD) treeifyBin(tab, i); if (oldVal != null) return oldVal; break; } } } // Counter plus one addCount(1L, binCount); // Return null return null; }

See the comments in the code for the detailed logic.

The putVal method spins in a for loop, repeatedly trying to insert the requested key-value pair. The loop distinguishes the following cases, implemented as five branches of the if-else block.

  • The table is empty: call the initialization method.
  • The hash position is empty: CAS the new node in directly, without locking.
  • The hash position holds a ForwardingNode: call helpTransfer.
  • The head node at the hash position already holds the key and onlyIfAbsent is true: return the existing value directly.
  • The hash position is occupied (a hash conflict): lock the head node and insert into the list or tree.

Pay attention to how binCount is updated during traversal: at the end, addCount(1L, binCount) increments the element count and passes binCount as the check parameter. A short usage example follows.
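
A short usage example (ordinary client code, not JDK internals) showing how put and putIfAbsent map onto putVal's onlyIfAbsent flag:

import java.util.concurrent.ConcurrentHashMap;

public class PutUsageExample {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        map.put("a", 1);          // putVal(key, value, false): always overwrites
        map.put("a", 2);          // value replaced, returns the old value 1

        map.putIfAbsent("a", 3);  // putVal(key, value, true): "a" exists, keeps 2
        map.putIfAbsent("b", 4);  // "b" absent, inserts 4

        System.out.println(map);  // {a=2, b=4} (iteration order may vary)
    }
}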

5.8 remove method
public V remove(Object key) { return replaceNode(key, null, null); } final V replaceNode(Object key, V value, Object cv) { int hash = spread(key.hashCode()); // spin for (Node<K,V>[] tab = table;;) { Node<K,V> f; int n, i, fh; // CASE1: cases where you can exit directly: the array is empty or the hash result position is null. if (tab == null || (n = tab.length) == 0 || (f = tabAt(tab, i = (n - 1) & hash)) == null) break; // CASE2: the node is moving. Help to move else if ((fh = f.hash) == MOVED) tab = helpTransfer(tab, f); // CASE3: hash conflict occurs. Look it up in the linked list else { V oldVal = null; boolean validated = false; // Lock the head node synchronized (f) {// The internal specific logic will not be repeated, which is similar to the put method above if (tabAt(tab, i) == f) { if (fh >= 0) { validated = true; // e represents the current loop processing node, and pred represents the previous node of the current loop node for (Node<K,V> e = f, pred = null;;) { K ek; // find if (e.hash == hash && ((ek = e.key) == key || (ek != null && key.equals(ek)))) { V ev = e.val; if (cv == null || cv == ev || (ev != null && cv.equals(ev))) { oldVal = ev; if (value != null) e.val = value; else if (pred != null) pred.next = e.next; else setTabAt(tab, i, e.next); } break; } pred = e; if ((e = e.next) == null) break; } } else if (f instanceof TreeBin) { validated = true; TreeBin<K,V> t = (TreeBin<K,V>)f; TreeNode<K,V> r, p; if ((r = t.root) != null && (p = r.findTreeNode(hash, key, null)) != null) { V pv = p.val; if (cv == null || cv == pv || (pv != null && cv.equals(pv))) { oldVal = pv; if (value != null) p.val = value; else if (t.removeTreeNode(p)) setTabAt(tab, i, untreeify(t.first)); } } } else if (f instanceof ReservationNode) throw new IllegalStateException("Recursive update"); } } if (validated) { if (oldVal != null) { // If it is a deletion, the number of elements is reduced by one if (value == null) addCount(-1L, -1); return oldVal; } break; } } } return null; }

The key point is again locking the head of the bucket's list to achieve thread safety; the rest follows the same pattern as put, see the source code above.
