HashMap (JDK 1.8) Source Code Analysis

Table of contents
Brief introduction
Data structure
Before JDK 1.8
After JDK 1.8
JDK 1.7 vs JDK 1.8 comparison
Inheritance diagram
Member variables
Construction method
Static inner class
Core method
hash() algorithm
put() method
resize() method
treeifyBin() method
get() method
remove() method
Brief introduction
Before JDK 1.8, HashMap was implemented as an array plus linked lists, i.e. chaining was used to handle collisions: nodes that hashed to the same bucket were stored in one linked list. When a bucket holds many elements, searching it key by key becomes slow. To mitigate the cost of frequent hash collisions, JDK 1.8 changed the structure to array + linked list + red-black tree: when the length of a list exceeds the threshold (8), the list (O(n) lookup) is converted into a red-black tree (O(log n) lookup), which greatly improves query efficiency in that bucket. Unless otherwise noted, everything below refers to the JDK 1.8 HashMap.

HashMap is arguably the most widely used Map implementation. It has the following characteristics:

Keys are unique; values may repeat
Backed by a hash table
Not thread-safe
Allows one null key and any number of null values
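
These characteristics can be observed directly with java.util.HashMap; a quick sketch (the class name is ours):

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static Map<String, String> build() {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value-for-null-key"); // a single null key is allowed
        map.put("k", null);                  // null values are allowed too
        map.put("k2", "v1");
        map.put("k2", "v2");                 // duplicate key: the old mapping is replaced
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> map = build();
        System.out.println(map.size());      // 3 entries: null, "k", "k2"
        System.out.println(map.get(null));   // value-for-null-key
        System.out.println(map.get("k2"));   // v2
    }
}
```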
Data structure
Java offers two elementary structures for storing data: arrays and linked lists. Arrays are easy to index into but costly to insert into and delete from; linked lists are hard to index into but easy to insert into and delete from. Combining the two plays to the strengths of each, and the resulting technique for resolving hash collisions is known as separate chaining (literally, the "zipper method").

Before JDK 1.8
Before JDK 1.8, separate chaining was used. Chaining combines linked lists with an array: an array of linked lists is created, so each array cell is (the head of) a linked list. When a hash collision occurs, the colliding entry is simply added to the corresponding list.

After JDK 1.8
JDK 1.8 changed collision handling significantly compared with earlier versions: when the length of a bucket's linked list exceeds the threshold (8 by default), the list is converted into a red-black tree to reduce search time.

JDK 1.7 vs JDK 1.8 comparison
JDK 1.8 mainly solves or optimizes the following problems:

resize (expansion) is optimized
The red-black tree is introduced so that query efficiency no longer suffers when a single linked list grows too long (the red-black tree algorithm itself is not covered here)
The infinite loop that multiple threads could trigger during resize is fixed, but HashMap is still not thread-safe; concurrent use may still lose data

Item by item, JDK 1.7 vs JDK 1.8:
Storage structure: array + linked list, vs array + linked list + red-black tree
Initialization: a separate inflateTable() function, vs logic folded directly into resize()
hash calculation: 9 perturbations (4 shifts + 5 XORs), vs 2 perturbations (1 shift + 1 XOR)
Storage rules: no collision: store in the array; collision: store in a linked list, vs no collision: store in the array; collision and list length < 8: store in a linked list; collision and list length >= 8: treeify and store in a red-black tree
Insertion: head insertion (move the existing nodes back one position, then insert at the head), vs tail insertion (append directly to the end of the list / red-black tree)
Index after expansion: always recomputed from scratch (hashCode -> perturbation function -> hash & (length - 1)), vs derived by rule (new position = original position, or original position + old capacity)
Inheritance diagram

HashMap extends the abstract class AbstractMap and implements the Map interface. It also implements two marker interfaces, neither of which declares any method; they exist only to indicate that the implementing class has a certain capability: Cloneable indicates that cloning is supported, while java.io.Serializable indicates that serialization is supported.

Member variables

//Default initial capacity of the Node array: 16
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
//Maximum array capacity
static final int MAXIMUM_CAPACITY = 1 << 30;
//Default load factor: 0.75
static final float DEFAULT_LOAD_FACTOR = 0.75f;
//Threshold at which a linked list is converted into a red-black tree
static final int TREEIFY_THRESHOLD = 8;
//Threshold at which a red-black tree is converted back into a linked list
static final int UNTREEIFY_THRESHOLD = 6;
//Minimum table capacity before a bucket may be treeified
static final int MIN_TREEIFY_CAPACITY = 64;
//Number of structural modifications to the HashMap. A structural modification changes the number of mappings or otherwise modifies the internal structure (for example, a rehash). This field makes iterators on collection views fail fast.
transient int modCount;
//Size at which the Node array is resized next; initially 16 * 0.75 = 12 (capacity * load factor)
int threshold;
//Load factor
final float loadFactor;
//Number of key-value pairs contained in the map
transient int size;
//The table, i.e. the array of Node key-value pairs; its length is always a power of 2. Each Node is the head of a singly linked list.
//Node<K,V> is an inner class of HashMap that implements the Map.Entry<K,V> interface; the hash bucket array stores Node<K,V> entries, and each node maintains a next pointer to the following element of its list. Notably, once the number of elements in a list exceeds TREEIFY_THRESHOLD, HashMap converts the list into a red-black tree, and the elements at that index become TreeNode<K,V>, which extends LinkedHashMap.Entry<K,V>, itself a subclass of Node<K,V>. The element type of the underlying array therefore remains Node<K,V>.
transient Node<K,V>[] table;
//The set of entries, usable for iterating over the map
transient Set<Map.Entry<K,V>> entrySet;

The relationship among capacity, threshold and loadFactor:
capacity: the capacity of the table; the default is 16
threshold: the size at which the table is next resized
loadFactor: the load factor; in general, threshold = capacity * loadFactor. The default load factor of 0.75 is a balanced choice between space and time efficiency, and it is recommended not to change it.
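
A minimal sketch of that arithmetic (the class name is ours): with the default capacity 16 and load factor 0.75, the threshold is 12, so the first resize is triggered when the 13th entry is inserted.

```java
public class ThresholdDemo {
    // Mirrors the HashMap defaults quoted above
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // 16
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // threshold = capacity * loadFactor, truncated to int as in resize()
    public static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR)); // 12
        System.out.println(threshold(32, 0.75f)); // 24, the threshold after one doubling
    }
}
```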
Construction method

//Initialize with capacity and load factor
public HashMap(int initialCapacity, float loadFactor) {
    //Validate the initial array capacity
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    //Validate the load factor (must be positive and not NaN)
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    //Initialize the load factor
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity);
}

//Initialize with capacity only
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

//Default constructor
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

//Copy the mappings of another map into the new map
public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    putMapEntries(m, false);
}

There are two main forms:

Specifying the initial capacity (the table array size, default 16) and the load factor (default 0.75)

Copying another Map directly, which is not discussed here

It is worth noting that when we specify an initial capacity, the constructor does not use our value directly as the capacity of the HashMap; instead it passes the value to the tableSizeFor method and uses the return value as the initial capacity.

Description of the tableSizeFor() method

//The index calculation relies on table.length always being a power of 2, i.e. 2^n
//Returns the smallest power of two greater than or equal to cap
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
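
The method can be exercised standalone; the sketch below copies tableSizeFor verbatim into a demo class (the class name is ours):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Verbatim copy of HashMap.tableSizeFor: smallest power of two >= cap.
    // The cascade of shifts smears the highest set bit of (cap - 1) into every
    // lower position, so n + 1 is the next power of two.
    public static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16 (already a power of two, kept as-is)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```

The `cap - 1` at the start is what makes an exact power of two map to itself rather than the next one up.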

Static inner class
HashMap encapsulates hash, key, value and next into a static inner class Node, which implements the Map.Entry<K,V> interface.

static class Node<K,V> implements Map.Entry<K,V> {
    // HashMap locates a record based on its hash value
    final int hash;
    // key of the node
    final K key;
    // value of the node
    V value;
    // next node in the list
    Node<K,V> next;

    // Constructor
    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    // Returns the key of the node
    public final K getKey()        { return key; }
    // Returns the value of the node
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    // Judges whether two entries are equal; returns true only when both key and value are equal
    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}

TreeNode inherits from LinkedHashMap.Entry<K,V>, and LinkedHashMap.Entry<K,V> is in turn a subclass of Node<K,V>; the element type of the underlying HashMap array therefore remains Node<K,V>.

// Red-black tree node implementation class, extending LinkedHashMap.Entry<K,V>
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {

    // Attributes: parent node, left subtree, right subtree, predecessor link (used on deletion), and color
    TreeNode<K,V> parent;
    TreeNode<K,V> left;
    TreeNode<K,V> right;
    TreeNode<K,V> prev;
    boolean red;

    // Constructor
    TreeNode(int hash, K key, V val, Node<K,V> next) {
        super(hash, key, val, next);
    }

    // Returns the root of the tree containing this node
    final TreeNode<K,V> root() {
        for (TreeNode<K,V> r = this, p;;) {
            if ((p = r.parent) == null)
                return r;
            r = p;
        }
    }
    // ... (other tree operations omitted)
}

Core method
hash() algorithm
Before JDK 1.8, the bottom layer of HashMap was a combination of arrays and linked lists. HashMap derives the hash value from the key's hashCode after processing by the perturbation function, then determines the storage position of the current element via (n - 1) & hash, where n is the length of the array. If an element already occupies that position, it compares the stored element's hash and key with those of the element to be inserted; if both match, the value is simply overwritten, otherwise the collision is resolved by chaining.

The so-called perturbation function is simply HashMap's hash method. It guards against poorly implemented hashCode() methods; in other words, using the perturbation function reduces collisions.

Source code of the hash method of JDK 1.8 HashMap:

Compared with JDK 1.7, the hash method in JDK 1.8 is simpler, but the principle remains unchanged.

// Takes the key's hashCode and mixes in the high bits; the modulo step happens later in putVal
// In the JDK 1.8 implementation the high-bit mixing was optimized:
// the high 16 bits of hashCode() are XORed into the low 16 bits: (h = key.hashCode()) ^ (h >>> 16).
// The design balances speed, utility and quality: even when the table array is small,
// both the high and low bits participate in the hash calculation, at negligible cost.
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
(1) First obtain the object's hashCode() value, shift it right by 16 bits, and XOR the result with the original hashCode. (h >>> 16 is the unsigned right shift introduced for this optimization; it uses zero extension, so zeros are shifted in at the high end regardless of sign.)

(2) In the putVal source, the position of the key in the HashMap is obtained via (n - 1) & hash, where hash is the value computed in (1) and n is the length of the hash bucket array. Because n is a power of two, (n - 1) & hash is equivalent to hash % n, and the & operation is cheaper than %.
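
That equivalence for power-of-two table sizes can be verified with a small sketch (the class and method names are ours); Math.floorMod is used because it always returns a non-negative remainder, matching the masking behavior even for negative hashes:

```java
public class IndexDemo {
    // For a power-of-two n, (n - 1) & hash picks the low log2(n) bits of hash,
    // which equals the non-negative remainder of hash modulo n
    public static int index(int hash, int n) {
        return (n - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16;
        for (int hash : new int[]{0, 5, 31, 12345, -7}) {
            System.out.println(index(hash, n) + " == " + Math.floorMod(hash, n));
        }
    }
}
```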

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    ...
    if ((p = tab[i = (n - 1) & hash]) == null) // compute the position
        tab[i] = newNode(hash, key, value, null);
    ...

Here tab is the table, n is the capacity of the map, and hash is the return value of the method above. Since a map is usually declared without a size, or at least with a capacity far smaller than the range of hash codes, the index computation hash & (n - 1) only ever looks at the low bits: the high bits of n - 1 are all zero, while the high bits of a key's hashCode usually do carry information. That is why the hash method first XORs the hashCode with its own 16-bit right shift, letting the high bits participate in the index calculation and markedly reducing the collision rate.
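
The effect of mixing in the high bits can be seen with a small sketch (the class name is ours) that reuses the hash method shown above:

```java
public class SpreadDemo {
    // The JDK 1.8 perturbation: XOR the high 16 bits into the low 16 bits
    public static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                     // small table: only the low 4 bits select the bucket
        int h1 = 0x10000, h2 = 0x20000; // hash codes that differ only in the high bits

        // Without the perturbation both land in bucket 0 and collide
        System.out.println(((n - 1) & h1) + " vs " + ((n - 1) & h2)); // 0 vs 0
        // With it the high bits influence the index and the collision disappears
        System.out.println(((n - 1) & hash(h1)) + " vs " + ((n - 1) & hash(h2))); // 1 vs 2
    }
}
```

(Integers are autoboxed here, and Integer.hashCode() returns the int value itself, so the inputs stand in directly for raw hash codes.)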

[Figure: a worked example of the index calculation, where n is the length of the table]

For comparison, here is the hash method of HashMap in JDK 1.7:

static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

Compared with the JDK 1.8 version, the JDK 1.7 hash method performs slightly worse, since after all it perturbs the hash with four shifts and several XORs instead of one of each.

//Store all elements of m into this HashMap instance
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    int s = m.size();
    if (s > 0) {
        // Determine whether the table has been initialized
        if (table == null) { // pre-size
            // Not yet initialized; s is the actual number of elements in m
            float ft = ((float)s / loadFactor) + 1.0F;
            int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                     (int)ft : MAXIMUM_CAPACITY);
            // If the computed t exceeds the current threshold, re-initialize the threshold
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        // Already initialized, and m has more elements than the resize threshold: expand
        else if (s > threshold)
            resize();
        // Add all elements of m to this HashMap
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            putVal(hash(key), key, value, false, evict);
        }
    }
}

put() method
When we call put, the hash of the key is computed first via the hash method, which XORs key.hashCode() with key.hashCode() >>> 16. The high 16 bits of the shifted value are zero-filled, and XOR with zero leaves a bit unchanged, so the effect of the hash function is roughly: the high 16 bits stay the same while the low 16 bits are XORed with the high 16 bits, in order to reduce collisions. As the comments in the source note, because the bucket array size is a power of two, the index is computed as index = (table.length - 1) & hash; without the mixing, only a few low bits would contribute to the index. The designers weighed speed, utility and quality and settled on the simple high/low 16-bit XOR. In addition, JDK 8 uses a tree structure of O(log n) complexity to improve performance when collisions do pile up.

Flow chart of putVal method implementation

public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

//Implements Map.put and related methods
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // Step 1: if tab is empty, create it
    // table is uninitialized or of length 0, so expand it
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // Step 2: compute the index and handle an empty bucket
    // (n - 1) & hash determines the bucket; if it is empty, the new node goes directly into the array
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    // The bucket already holds an element
    else {
        Node<K,V> e; K k;
        // Step 3: the key already exists, so its value will be overwritten below
        // Compare the hash and key of the first element (the node in the array) in the bucket
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // Record the first element in e
            e = p;
        // Step 4: decide between red-black tree and linked list
        // Keys differ; if the current node is a TreeNode, the bucket is a red-black tree
        // putTreeVal returns any existing node for the key, so e may be null
        else if (p instanceof TreeNode)
            // Insert into the tree
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // Step 5: the bucket is a linked list
        else {
            // Insert the node at the end of the list
            for (int binCount = 0; ; ++binCount) {
                // Reached the tail of the list
                if ((e = p.next) == null) {
                    // Append a new node at the tail
                    p.next = newNode(hash, key, value, null);
                    // If the list length reaches the treeify threshold of 8...
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        // ...convert the chain structure into a tree structure
                        treeifyBin(tab, hash);
                    // Leave the loop
                    break;
                }
                // Check whether a node in the list has the same key as the inserted element
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    // Equal: leave the loop
                    break;
                // Advance along the list; combined with e = p.next above, this traverses the bucket
                p = e;
            }
        }
        // A node with the same hash and key already exists: replace its value and return the old one
        if (e != null) { // existing mapping for key
            // Record e's value
            V oldValue = e.value;
            // onlyIfAbsent is false, or the old value is null
            if (!onlyIfAbsent || oldValue == null)
                // Replace the old value with the new one
                e.value = value;
            // Callback after access
            afterNodeAccess(e);
            // Return the old value
            return oldValue;
        }
    }
    // Structural modification
    ++modCount;
    // Step 6: expand if the size exceeds the threshold
    if (++size > threshold)
        resize();
    // Callback after insertion
    afterNodeInsertion(evict);
    return null;
}

①. If the table array is null or empty, execute resize() to create it;

②. Compute the array index i from the key's hash. If table[i] == null, directly create a new node there and go to ⑥; if table[i] is not empty, go to ③;

③. Check whether the first element of table[i] has the same key as the one being inserted; if so, directly overwrite the value, otherwise go to ④. "Same" here means matching hashCode and equals;

④. Check whether table[i] is a TreeNode, i.e. whether the bucket is a red-black tree. If it is, insert the key-value pair into the tree, otherwise go to ⑤;

⑤. Traverse the list at table[i]; if its length reaches the threshold of 8, convert the list into a red-black tree and insert there, otherwise append to the list. If a node with the same key is found during the traversal, simply overwrite its value;

⑥. After a successful insertion, check whether the number of key-value pairs (size) exceeds the threshold; if it does, expand the table.
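
The "overwrite the value" behavior of steps ③ and ⑤ is observable from outside through put's return value; a quick usage sketch with java.util.HashMap:

```java
import java.util.HashMap;

public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1)); // null: there was no previous mapping
        System.out.println(map.put("a", 2)); // 1: the old value is returned and replaced
        System.out.println(map.get("a"));    // 2
        System.out.println(map.size());      // 1: overwriting does not grow the map
    }
}
```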

resize() method
①. In JDK 1.8, resize is called when the number of key-value pairs exceeds the threshold, or on initialization;

②. Each expansion doubles the capacity;

③. After an expansion, a Node either stays at its original index or moves to the original index plus the old capacity.

In putVal() we saw the resize() method used twice: for the first initialization, and when the actual size of the array exceeds its threshold (initially 12). While expanding, the elements in each bucket are redistributed at the same time, which is another JDK 1.8 optimization: in 1.7 the index had to be fully recomputed from the hash after expansion, whereas in 1.8 the nodes of a bucket are split by testing (e.hash & oldCap). If the result is 0 the element keeps its original position; otherwise it moves to its original position plus the old array size.
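
The stay-or-move rule can be checked with a small sketch (the class and method names are ours): the shortcut always agrees with recomputing the index against the doubled table.

```java
public class ResizeSplitDemo {
    // During resize, the new index of a node is either its old index (when
    // (hash & oldCap) == 0, i.e. the newly exposed index bit is 0) or
    // old index + oldCap (when that bit is 1)
    public static int newIndex(int hash, int oldCap) {
        int oldIndex = hash & (oldCap - 1);
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        for (int hash : new int[]{5, 21, 100, 12345}) {
            // The shortcut matches a full recomputation against the doubled table
            System.out.println(newIndex(hash, oldCap) + " == " + (hash & (newCap - 1)));
        }
    }
}
```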

final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table; // oldTab points to the old hash bucket array
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) { // the hash bucket array has already been initialized
        if (oldCap >= MAXIMUM_CAPACITY) { // already at the maximum capacity: stop resizing
            threshold = Integer.MAX_VALUE; // set the threshold to the maximum int value
            return oldTab;
        }
        // Double the capacity; if the result is still below the maximum and oldCap is at least the default of 16
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double the threshold as well
    }
    // The old capacity is 0 but the threshold is greater than zero: the capacity was passed
    // to a constructor, which stored it (rounded up to a power of two) in threshold.
    // Use that value directly as the new capacity.
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    // A map created by the no-argument constructor gets the default capacity and threshold: 16 and 16 * 0.75
    else { // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // New threshold = newCap * loadFactor
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    // Allocate the new array and point the member variable table at it
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; // new hash bucket array
    table = newTab;
    // If the old array was never initialized, resize ends here; otherwise enter the
    // redistribution logic that spreads the elements evenly over the new array
    if (oldTab != null) {
        // Traverse every bucket of the old array
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                // Hold the bucket head in the temporary variable e and clear the old slot,
                // so the old array can be reclaimed by the GC
                oldTab[j] = null;
                // e.next == null means the bucket holds a single element, with no list or tree
                if (e.next == null)
                    // Place the element into the new array using the same hash mapping
                    newTab[e.hash & (newCap - 1)] = e;
                // e is a TreeNode and e.next != null: redistribute the elements of the tree
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                // e is the head of a linked list and e.next != null: redistribute the list
                else { // preserve order
                    // loHead/loTail collect the nodes whose index stays the same after resizing
                    Node<K,V> loHead = null, loTail = null;
                    // hiHead/hiTail collect the nodes whose index moves to j + oldCap
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    // Traverse the linked list
                    do {
                        next = e.next;
                        // (e.hash & oldCap) == 0: the newly exposed index bit is 0,
                        // so the node keeps its original index
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                // First node of the "low" list: loHead becomes the head of
                                // the list whose index stays unchanged
                                loHead = e;
                            else
                                // Otherwise append behind the current tail
                                loTail.next = e;
                            // Advance the tail to the current element e; because loHead and
                            // loTail initially reference the same node, every loTail.next
                            // assignment also extends the chain reachable from loHead
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                // First node of the "high" list: hiHead becomes the head of
                                // the list whose index moves by oldCap
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // After the traversal, terminate each list and store its head at the
                    // corresponding index of the new array
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

treeifyBin() method
In the putVal() method we saw that when the length of a linked list exceeds the treeify threshold, treeifyBin() is called to convert the list structure into a red-black tree structure, one of the new optimizations in JDK 1.8.

The method does two main things:

1. If the table is uninitialized, or its length has not yet reached the MIN_TREEIFY_CAPACITY threshold, resize (initialize or expand) the table instead of treeifying;

2. Otherwise treeify: first traverse the elements of the list in the bucket, wrapping each node in an equivalent TreeNode; then take the first element as the head of the chain and build the tree from it.

final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // The table is too small to treeify: expand instead
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    // Start treeifying
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        // Walk the bucket's list, replacing each Node with a TreeNode;
        // hd becomes the head of the new chain, starting from the list head
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                // The chain already has a tail: link the new node behind it
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        // Put the TreeNode chain into the bucket and build the actual tree from its head
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}

get() method
Note: HashMap also does not directly provide the getNode interface for users to call, but provides the get method, which obtains elements through getNode.

public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    // The table has been initialized, its length is greater than 0, and the bucket located by hash is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // The first node (array element) in the bucket matches
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        // More than one node in the bucket
        if ((e = first.next) != null) {
            // The bucket holds a red-black tree
            if (first instanceof TreeNode)
                // Search in the red-black tree
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            // Otherwise, search the linked list
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
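Two behaviors visible in this code are worth calling out: HashMap permits one null key (hashed to bucket 0) as well as null values, and get returns null both for an absent key and for a key mapped to null, so containsKey is needed to tell the two cases apart. A minimal sketch (class name `GetDemo` is just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", "1");
        map.put(null, "nullKeyValue"); // HashMap allows one null key
        map.put("b", null);            // and null values

        System.out.println(map.get("a"));    // prints 1
        System.out.println(map.get(null));   // prints nullKeyValue
        // null is returned both for an absent key and for a key mapped to null:
        System.out.println(map.get("b"));        // prints null
        System.out.println(map.get("missing"));  // prints null
        // containsKey distinguishes the two cases:
        System.out.println(map.containsKey("b"));       // prints true
        System.out.println(map.containsKey("missing")); // prints false
    }
}
```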

remove() method

/**
 * Removes the key-value pair for the specified key from the HashMap and returns the removed value
 * A null return means either that the key was absent or that the key was mapped to null
 * Use the containsKey method to determine whether the key actually exists
 */
public V remove(Object key) {
    Node<K,V> e; // Node variable to hold the removed node (key-value pair)
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value; // Delegate to removeNode
}

As you can see, remove actually delegates to the removeNode method to delete the key-value pair node and then reads the value off the returned node. Next we analyze the code of removeNode in detail.

/**
 * The method is final and cannot be overridden. Subclasses can add their own logic by overriding the afterNodeRemoval hook (analysed separately)
 * @param hash hash of the key, obtained via hash(key)
 * @param key key of the key-value pair to delete
 * @param value value of the key-value pair to delete; whether it is used as a deletion condition depends on matchValue
 * @param matchValue if true, delete only when the stored value equals(value) the given value; otherwise the value is ignored
 * @param movable whether other nodes may be moved after deletion; if false, do not move them
 * @return the removed node, or null if no node was removed
 */
final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    Node<K,V>[] tab; Node<K,V> p; int n, index; // Node array, current node, array length, index
    /*
     * If the table is not empty, its length n is greater than 0, and the node p located by hash
     * (the root of a tree or the first node of a list) is not null,
     * traverse down from p looking for the node that matches the key
     */
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K,V> node = null, e; K k; V v; // The node to return, plus temporary node, key and value variables
        // If the first node's key equals the given key, it is the node to delete; assign it to node
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            node = p;
        /*
         * Reaching here means the first node did not match, so check whether there is a next node
         * If there is none, there was no hash collision at this position, nothing matches, and null is returned at the end
         * If there is one, this position has seen collisions and holds either a linked list or a red-black tree
         */
        else if ((e = p.next) != null) {
            // If the first node is a TreeNode, the bucket is already a red-black tree; call getTreeNode to find the matching node in the tree
            if (p instanceof TreeNode)
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            // Otherwise it is a linked list; compare the nodes one by one from head to tail
            else {
                do {
                    // If node e's key equals the given key, e is the node to delete; assign it to node and break out of the loop
                    if (e.hash == hash &&
                        ((k = e.key) == key ||
                         (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    // Reaching here means e did not match
                    p = e; // Point p at e so that p always holds the parent of the next e; if the next e matches, p is the matched node's parent
                } while ((e = e.next) != null); // If e has a next node, keep matching until a node matches or the whole list is traversed
            }
        }
        /*
         * If node is not null, a node matching the key was found
         * If the value does not need to be compared, or it does and the values are equal,
         * the node can be deleted
         */
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode) // The node lives in a red-black tree; call removeTreeNode (analysed separately) to remove it
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            else if (node == p) // node == p means the node is the first node of the bucket
                tab[index] = node.next; // Deleting the first node: point the bucket directly at the second node
            else // Otherwise p is the node's parent; unlink the node by pointing p.next past it
                p.next = node.next;
            ++modCount; // Increment the structural modification count of the HashMap
            --size; // Decrement the element count of the HashMap
            afterNodeRemoval(node); // Hook with no implementation here; subclasses may override it as needed
            return node;
        }
    }
    return null;
}
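The matchValue parameter is reachable through the public API: `remove(key)` calls removeNode with matchValue = false, while the `Map.remove(key, value)` overload calls it with matchValue = true, so the entry is removed only when the stored value matches. A short sketch of both paths (class name `RemoveDemo` is just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("x", 1);

        // remove(key) -> removeNode(..., matchValue = false): the value is ignored.
        Integer old = map.remove("x");
        System.out.println(old);                  // prints 1

        map.put("y", 2);
        // remove(key, value) -> removeNode(..., matchValue = true):
        // the entry is removed only if the current value equals the argument.
        System.out.println(map.remove("y", 99));  // prints false (value mismatch, entry kept)
        System.out.println(map.remove("y", 2));   // prints true (removed)
        System.out.println(map.size());           // prints 0
    }
}
```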

Four traversal methods of HashMap

//Four ways to traverse a HashMap
public static void main(String[] args) {
    Map<String, String> map = new HashMap<String, String>();
    map.put("1", "value1");
    map.put("2", "value2");
    map.put("3", "value3");
    map.put("4", "value4");

    //First: traverse via Map.entrySet; recommended, especially when the map is large
    System.out.println("Traverse key and value via Map.entrySet: ");
    for (Map.Entry<String, String> entry : map.entrySet()) {
        System.out.println("Key: " + entry.getKey() + " - Value: " + entry.getValue());
    }

    //Second: traverse via Map.entrySet using an iterator
    System.out.println("\nTraverse key and value via Map.entrySet with an iterator: ");
    Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
    while (it.hasNext()) {
        Map.Entry<String, String> entry = it.next();
        System.out.println("Key: " + entry.getKey() + " - Value: " + entry.getValue());
    }

    //Third: traverse via Map.keySet, then fetch each value with get(key) (a second lookup per key)
    System.out.println("\nTraverse key and value via Map.keySet: ");
    for (String key : map.keySet()) {
        System.out.println("Key: " + key + " - Value: " + map.get(key));
    }

    //Fourth: traverse via Map.values(); visits all values but not the keys
    System.out.println("\nTraverse all values (but not keys) via Map.values(): ");
    for (String v : map.values()) {
        System.out.println("The value is " + v);
    }
}
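Besides the four ways above, since Java 8 the Map interface also offers forEach, which takes a BiConsumer and is often the most concise option. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class ForEachDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("1", "value1");
        map.put("2", "value2");

        // Map.forEach visits each entry once, passing key and value to the lambda.
        map.forEach((k, v) -> System.out.println("Key: " + k + " - Value: " + v));
    }
}
```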


Posted on Mon, 17 Feb 2020 22:15:52 -0500 by the-Jerry