Binary tree
Linked lists and arrays are linear structures, while trees are nonlinear. A tree is a hierarchical structure defined by branching relationships; kinship charts and organization charts, for example, can be represented vividly by trees.
1 Definition and structure of tree
1.1 definition of tree
A tree is a finite set of $n\;(n \ge 0)$ nodes. The tree is defined recursively: any nonempty tree consists of a root node and several subtrees, and each subtree in turn consists of a root node and 0 or more subtrees.
- Root node: each tree has exactly one root node. The root node is the topmost node in the tree and has no predecessor node.
- Subtree: apart from the root node, the remaining nodes are divided into $m\;(m \ge 0)$ disjoint sets, and each set is itself a tree, called a subtree of the root.
As shown in the right figure,
- Root node $A$ has three subtrees below it, whose root nodes are $B$, $C$, and $D$ respectively.
- Under node $B$ there are two subtrees, with $E$ and $F$ as their root nodes.
- Node $F$ can be regarded as a tree with only a root node; its subtrees are empty.
Any tree can be divided into a root and subtrees. It is called a tree because its structure looks like an upside-down tree, with the root at the top and the leaves at the bottom.
1.2 related concepts of tree
The basic concepts of tree structure are listed below, and the relationship of each node is also expressed by kinship.
Name | Definition |
---|---|
Degree of a node | The number of subtrees a node has, i.e. its number of child nodes |
Leaf node | A node with degree 0, i.e. a node without child nodes, at the bottom of the tree; also called a terminal node |
Branch node | A node with degree not 0, i.e. a node with child nodes; also called a non-terminal node, or an internal node other than the root |
Child node | The root node of a subtree of a node, i.e. a node directly below it |
Parent node | If a node has child nodes, it is the parent node of those children |
Sibling nodes | Child nodes of the same parent node are siblings of each other |
Degree of the tree | The maximum degree among all nodes in the tree; it can be regarded as the width of the tree |
Level of a node | Counting from the root, the root node is level 1, its children are level 2, and so on |
Height of the tree | The maximum level among all nodes in the tree; it can be regarded as the depth of the tree |
Cousin nodes | Nodes whose parent nodes are on the same level |
Ancestor nodes | All nodes on the path from the root down to a node are ancestors of that node |
Descendant nodes | The opposite of ancestors: all nodes in the subtree rooted at a node are its descendants |
Forest | A collection of disjoint trees is called a forest; all the subtrees of a node form a forest |
Note that sibling nodes do not include cousins; only children of the same parent node are siblings. By default the root node is at level 1; sometimes level 0 is used instead, but that convention cannot conveniently represent the level of an empty tree.
1.3 representation of tree
There are many ways to define the structure of a tree. The key is how to represent the relationship between adjacent nodes.
Other representations
Child representation: if the degree $N$ of the tree is known, we can define a structure like this:
```c
struct TreeNode
{
    TNDataType data;
    struct TreeNode* subs[N]; // pointers to all child nodes
};
```
Each node stores its data and an array of pointers to all of its child nodes. Since the degree of the tree is known, `subs[N]` is large enough, but it inevitably wastes space.
```c
typedef struct TreeNode* SLDataType; // element type stored in the sequence list

struct TreeNode
{
    TNDataType data;
    SeqList sl; // sequence list storing pointers to the child nodes
};
```
To solve both the wasted space and the problem of an unknown tree degree, we can use a linear table (sequence list) instead of a static array to store the pointers to the child nodes. The disadvantage is that the structure becomes more complex.
Parent representation: each node stores its own data and the subscript of its parent node. An array of such structures stores the node information, and traversing the array traverses the tree.
```c
struct TreeNode
{
    TNDataType data;
    int parenti; // subscript of the parent node in the array
};
```
Child-sibling representation
The methods above each have advantages and disadvantages. The best way to represent a tree is the left-child, right-sibling representation.
```c
struct TreeNode
{
    // data field
    TNDataType data;
    // pointer fields
    struct TreeNode* firstChild;  // first child of this node
    struct TreeNode* nextBrother; // first sibling to the right
};
```
There are only two pointers in the pointer field of the node
- firstChild points to the first child of the node,
- nextBrother points to the node's first sibling to its right.
First layer: the root node $A$, which has no sibling.
Second layer: the first child of node $A$ is node $B$, and $B$'s sibling is node $C$.
Third layer: the first child of node $B$ is node $D$, whose siblings are nodes $E$ and $F$; the child of node $C$ is $G$.
Fourth layer: node $D$ has no children; its sibling $E$ has a child $H$, and $H$'s sibling is node $I$; nodes $F$ and $G$ have no children.
As long as the root node is known, every other node can be reached through its parent's firstChild pointer or a sibling's nextBrother pointer; a missing pointer is simply null. This method does not require knowing the degree $N$ of the tree, needs no linear table, keeps the structure simple, and wastes no space, which is why it is considered the best representation of a tree; a short traversal sketch follows.
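As a quick illustration (not part of the original), a minimal sketch of walking such a tree, assuming the TreeNode structure above with TNDataType taken to be char:

```c
#include <stdio.h>

typedef char TNDataType; // assumed element type for this sketch

struct TreeNode          // repeated from above for self-containment
{
    TNDataType data;
    struct TreeNode* firstChild;  // first child of this node
    struct TreeNode* nextBrother; // first sibling to the right
};

// print every node reachable from root: the node itself, then each child in turn
void TreePrint(struct TreeNode* root)
{
    if (root == NULL)
        return;
    printf("%c ", root->data);
    for (struct TreeNode* cur = root->firstChild; cur != NULL; cur = cur->nextBrother)
        TreePrint(cur);
}
```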
The most classic application of trees in computing is the file system, i.e. the directory tree. Opening a folder pops up a series of subfolders, which closely resembles finding a node's first child and then its siblings.
2 Definition and structure of binary tree
The general tree structure is not used directly very often and is introduced here mainly for understanding. For storing and managing data, the binary tree is by far the most common.
2.1 definition of binary tree
A binary tree is also a finite set of nodes, possibly empty. Each node can have zero, one, or at most two subtrees, called the left subtree and the right subtree. As shown in the figure:
- A binary tree contains no node with degree greater than 2.
The maximum degree of a binary tree is thus 2: with degree 0 it is an empty tree or has only a root node, with degree 1 it degenerates into a linear structure, and only with degree 2 can a node have two subtrees.
- The subtrees of a node in a binary tree are distinguished as left and right, and their order cannot be reversed.
Any binary tree is composed of the following situations:
Special binary tree
- Full binary tree: all leaf nodes are on the last layer; equivalently, every branch node has two subtrees, or every layer contains the maximum possible number of nodes. Such a tree is a full binary tree.
Suppose a full binary tree has $K$ layers. The $K$-th layer contains $2^{K-1}$ nodes and the whole tree contains $2^K - 1$ nodes. Conversely, if the total number of nodes is $N$, the height of the tree is $\log_2(N+1)$.
- Complete binary tree: the first $n-1$ layers of a complete binary tree form a full binary tree. The last layer need not be full, but its nodes must be contiguous from left to right.
Full binary tree is a special complete binary tree.
Properties of binary tree
- The $i$-th layer of a nonempty binary tree contains at most $2^{i-1}$ nodes.
- A binary tree of depth $h$ contains at most $2^h - 1$ nodes.
- For any binary tree, the number of leaf nodes $n_0$ is exactly one more than the number of branch nodes of degree 2, i.e. $n_0 = n_2 + 1$.
A binary tree has the property that whenever a degree-2 branch node is added, a leaf node is necessarily added as well.
- In a complete binary tree, the number of nodes with degree 1 can only be 0 or 1.
- If a full binary tree has $N$ nodes in total, the height of the tree is $h = \log_2(N+1)$ (a quick numeric check of these properties follows below).
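A worked check, using a full binary tree with $K = 4$ layers (an example added here, not in the original):

$$N = 2^4 - 1 = 15, \qquad n_0 = 2^{4-1} = 8, \qquad n_2 = 15 - 8 = 7, \qquad n_0 = n_2 + 1, \qquad h = \log_2(15 + 1) = 4 .$$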
2.2 structure of binary tree
Adding, deleting, searching, and modifying an ordinary binary tree is not meaningful in itself; the point is to learn to control the binary tree structure, laying a foundation for the binary search tree, AVL tree, and red-black tree studied later.
Sequential storage structure
Sequential storage means array storage: the nodes are stored layer by layer from the root, left to right. In general, arrays are only suitable for representing complete binary trees.
Storing a sparse tree, one missing many branches and leaves, in an array wastes space; and if the array is packed to avoid wasting space, the tree structure can no longer be recovered from the indices in a regular way.
More importantly, the subscript can be used to calculate the parent-child nodes of nodes. As shown in the figure:
$$leftChild = parent \times 2 + 1 \qquad rightChild = parent \times 2 + 2$$
$$parent = (child - 1) \;/\; 2$$
- Given a node's subscript, its left child's subscript is the node's subscript times 2 plus 1, and its right child's subscript is the node's subscript times 2 plus 2.
- Given a child's subscript, the parent's subscript is (child − 1) divided by 2 with integer division. Subtracting 1 works for both children: the left child has an odd subscript and the right child an even one, but integer division makes the results agree, as the sketch after this list shows.
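A minimal sketch of this index arithmetic (the helper names are my own, not from the original):

```c
// subscript arithmetic for a binary tree stored level by level in an array
int LeftChildIndex(int parent)  { return parent * 2 + 1; }
int RightChildIndex(int parent) { return parent * 2 + 2; }
int ParentIndex(int child)      { return (child - 1) / 2; } // same result for both children

// example: the node at subscript 1 has children at 3 and 4, and both
// (3 - 1) / 2 and (4 - 1) / 2 evaluate to 1 with integer division
```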
Linked Storage Structure
Using linked nodes to represent a binary tree is more intuitive. There are usually two schemes: the binary-linked (two-pointer) node and the triple-linked (three-pointer) node. A binary-linked node stores the data field and the left and right child pointers; the triple-linked node additionally stores a pointer to its parent.
For now we generally use the binary-linked form; the triple-linked form appears in more advanced structures such as the red-black tree and is mentioned here only for understanding.
```c
// binary-linked node
struct BinaryTreeNode
{
    struct BinaryTreeNode* leftChild;
    struct BinaryTreeNode* rightChild;
    BTDataType data;
};

// triple-linked node
struct BinaryTreeNode
{
    struct BinaryTreeNode* parent;
    struct BinaryTreeNode* leftChild;
    struct BinaryTreeNode* rightChild;
    BTDataType data;
};
```
3 sequential structure of binary tree
3.1 sequence structure
As explained earlier, ordinary binary trees are not suited to array storage, which wastes a lot of space, while the complete binary tree is. The heap is a data structure that is a complete binary tree stored in an array.
Note that this heap is not the heap of memory management: the operating system divides memory into areas, one of which is called the heap; the heap discussed here is a data structure.
3.2 definition and structure of the heap
Define a collection of keys $\lbrace k_0, k_1, k_2, \dots, k_{n-1} \rbrace$ and store them in an array in the order of a complete binary tree, such that they satisfy one of the following rules:
$$K_i \le K_{2i+1} \;\&\&\; K_i \le K_{2i+2} \tag{1}$$
$$K_i \ge K_{2i+1} \;\&\&\; K_i \ge K_{2i+2} \tag{2}$$
A heap satisfying formula $(1)$, i.e. every node is less than or equal to its children, is called a small (root) heap or min-heap; conversely, a heap satisfying formula $(2)$, i.e. every node is greater than or equal to its children, is called a large (root) heap or max-heap.
So a heap is a complete binary tree in which the value of every node is never less than (large heap) or never greater than (small heap) the values of its children. The underlying array is not sorted: the heap only guarantees the order between parents and children, not a total order.
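As a quick check of the definition, a small sketch (the helper name and sample data are assumptions, not from the original) that tests the small-heap rule on an array:

```c
#include <stdbool.h>

// returns true if a[0..n-1] satisfies K_i <= K_{2i+1} and K_i <= K_{2i+2}
bool IsSmallHeap(const int* a, int n)
{
    for (int i = 0; 2 * i + 1 < n; i++)
    {
        if (a[2 * i + 1] < a[i])
            return false;
        if (2 * i + 2 < n && a[2 * i + 2] < a[i])
            return false;
    }
    return true;
}

// example: {10, 15, 56, 25, 30, 70, 75} is a small heap,
// while {70, 56, 30, 25, 15, 10, 75} is not (56 < 70 already violates the rule)
```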
3.3 implementation of heap
It can be seen that the logical structure of the heap is a complete binary tree and the physical structure is an array. It can also be said that a binary tree is actually an array, or imagine an array as a binary tree.
Heap structure definition
```c
typedef int HPDataType;

typedef struct heap
{
    HPDataType* a;  // array storing the complete binary tree
    int size;       // number of elements currently in the heap
    int capacity;   // capacity of the array
} heap;
```
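The interface functions below also rely on assert, realloc, bool, and a Swap helper that this excerpt never defines; a minimal sketch of the assumed headers and helper (the names follow the calls in the code, but this exact definition is an assumption):

```c
#include <assert.h>   // assert
#include <stdbool.h>  // bool
#include <stdio.h>    // perror, printf
#include <stdlib.h>   // realloc, free, exit

// assumed helper used by AdjustUp and AdjustDown to exchange two elements
void Swap(HPDataType* p1, HPDataType* p2)
{
    HPDataType tmp = *p1;
    *p1 = *p2;
    *p2 = tmp;
}
```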
Heap insertion
```c
void HeapPush(heap* php, HPDataType x)
{
    assert(php);
    // expand the array if it is full
    if (php->size == php->capacity)
    {
        int newCapacity = php->capacity == 0 ? 4 : php->capacity * 2;
        HPDataType* tmp = (HPDataType*)realloc(php->a, sizeof(HPDataType) * newCapacity);
        if (tmp == NULL)
        {
            perror("HeapPush::realloc");
            exit(-1);
        }
        php->a = tmp;
        php->capacity = newCapacity;
    }
    // insert at the end of the array
    php->a[php->size] = x;
    php->size++;
    // adjust upward from the new node
    AdjustUp(php->a, php->size, php->size - 1);
}
```
Heap insertion appends at the end of the array, which corresponds to adding a leaf node to the complete binary tree. Because the inserted value is not necessarily in the right place, the heap property may be broken. The new node can only affect the nodes on the path from itself to the root, so we adjust upward: swap values along that path until the heap property holds again.
Heap up adjustment algorithm
```c
void AdjustUp(HPDataType* a, int size, int child)
{
    assert(a);
    while (child > 0)
    {
        int parent = (child - 1) / 2;
        // large heap: move the child up while it is greater than its parent
        if (a[child] > a[parent])
        {
            Swap(&a[child], &a[parent]);
            // iterate upward
            child = parent;
        }
        else
        {
            break;
        }
    }
}
```
The upward adjustment algorithm walks from the child toward the root: if the child is greater (large heap) or smaller (small heap) than its parent, swap them, and continue until the root is reached or the condition no longer holds.
Adjusting upward is simple because a node has only one parent, so only one comparison is needed per step.
Deletion of heap
```c
void HeapPop(heap* php)
{
    assert(php);
    assert(!HeapEmpty(php));
    // swap the top with the last element, then remove the last element
    Swap(&php->a[0], &php->a[php->size - 1]);
    php->size--;
    // adjust downward from the root
    AdjustDown(php->a, php->size, 0);
}
```
The delete operation removes the element at the top of the heap, but we cannot simply shift the array forward by one position: that would reshuffle the parent-child relations of the whole tree and the result would no longer be guaranteed to be a heap. Instead, swap the last element of the array with the first, remove the last element, and then adjust downward from the root. This is the heap's delete operation.
Heap down adjustment algorithm
```c
void AdjustDown(HPDataType* a, int size, int parent)
{
    int child = parent * 2 + 1;
    // large heap
    while (child < size)
    {
        // pick the larger of the two children
        if (child + 1 < size && a[child + 1] > a[child])
        {
            child++;
        }
        // swap if the child is greater than the parent
        if (a[child] > a[parent])
        {
            Swap(&a[child], &a[parent]);
            // iterate downward
            parent = child;
            child = parent * 2 + 1;
        }
        else
        {
            break;
        }
    }
}
```
Moving the last element to the top of the heap necessarily breaks the heap property, but the left and right subtrees of the root still keep theirs. Therefore only the new top element needs to be adjusted downward: swap it with its larger (large heap) or smaller (small heap) child as long as that child violates the parent-child condition, and continue until a leaf is reached or the condition holds.
Swapping with the larger child restores the large-heap property; swapping with the smaller child restores the small-heap property.
Other interface functions
```c
// initialization and destruction
void HeapInit(heap* php)
{
    assert(php);
    php->a = NULL;
    php->size = php->capacity = 0;
}

void HeapDestroy(heap* php)
{
    assert(php);
    free(php->a);
    php->size = php->capacity = 0;
}

// top of the heap
HPDataType HeapTop(heap* php)
{
    assert(php);
    assert(!HeapEmpty(php));
    return php->a[0];
}

// number of elements in the heap
int HeapSize(heap* php)
{
    assert(php);
    return php->size;
}

// emptiness check
bool HeapEmpty(heap* php)
{
    assert(php);
    return !php->size;
}

// print the heap
void HeapPrint(heap* php)
{
    assert(php);
    for (int i = 0; i < php->size; i++)
    {
        printf("%d ", php->a[i]);
    }
    printf("\n");
}
```
Heap creation
Given an array a, the array can logically be regarded as a complete binary tree, but it is not necessarily a heap. Building a heap means adjusting the array into a heap with an adjustment algorithm; using the idea of heap insertion, array a can be built into a heap.
Method 1: adjust upward
Starting from the root, i.e. the first array element, treat the array elements as if they were "inserted" into the heap one by one. Strictly speaking nothing is inserted, only "added": traverse the array by subscript, regard each traversed element as a node newly added to the heap, and adjust after each addition.
If you need to arrange a in ascending order, you might as well try to build the a array into a small heap:
```c
// build the heap by upward adjustment
void HeapBuild(int* a, int sz)
{
    // traverse from the second node to the last node
    for (int i = 1; i < sz; i++)
    {
        AdjustUp(a, sz, i);
    }
}
```
Each time an element is "added", AdjustUp is called to adjust upward from that node toward the root.
In spirit this is the same as the interface function HeapPush: adjust once per insertion. The only difference is that the interface allocates new space for the heap, while here the array is adjusted in place.
Adding one node and adjusting from that node toward the root can be understood as "building while adjusting".
Method 2: adjust downward
Similarly, the downward adjustment follows the same logic as the heap's HeapPop interface. During deletion the first and last elements are swapped; the heap property at the root is broken, but the left and right subtrees still satisfy it, so a downward adjustment suffices.
The complete binary tree a, however, is not yet a heap, and none of its subtrees is guaranteed to be one. Therefore we adjust downward starting from the last subtree and move toward the front. More precisely, since a subtree consisting of a single leaf needs no adjustment, we start from the parent of the last node and walk back to the root.
```c
// build the heap by downward adjustment
void HeapBuild(int* a, int sz)
{
    // start from the parent of the last node and move back to the root
    for (int i = (sz - 1 - 1) / 2; i >= 0; i--)
    {
        AdjustDown(a, sz, i);
    }
}
```
Both ways of building the heap, upward adjustment and downward adjustment, work, and the code above is correct whether you build a large heap or a small heap; only the comparison operator in the adjustment algorithm needs to change.
Starting from the parent of the last node of the complete binary tree and moving toward the front can likewise be seen as "adjusting while building", only from back to front. A small usage sketch follows.
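A small usage sketch (the sample data is taken from the heap-sort example later; the test driver itself is an assumption):

```c
#include <stdio.h>

int main(void)
{
    int a[] = { 70, 56, 30, 25, 15, 10, 75 };
    int sz = sizeof(a) / sizeof(a[0]);

    HeapBuild(a, sz);            // adjust the array into a heap in place

    for (int i = 0; i < sz; i++) // print the array in level order of the heap
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```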
Heap-building time complexity
Building the heap with the upward adjustment algorithm costs $O(N\log N)$. Heap building is usually needed together with heap sort, where the downward adjustment algorithm is used, so we analyze that one.
The worst case of a single downward adjustment is moving from the root of the current subtree all the way down to a leaf. Suppose the tree has $n$ nodes and height $h$:
- the first layer has $2^0$ nodes, each adjusted downward at most $h-1$ times;
- the second layer has $2^1$ nodes, each adjusted downward at most $h-2$ times;
- and so on, until the $(h-1)$-th layer with $2^{h-2}$ nodes, each adjusted at most once.
The total number of node moves in the whole downward build is therefore the sum over layers of (number of nodes) $\times$ (layers moved down):
$$T(n) = \sum_{i=1}^{h-1} 2^{\,i-1}\,(h-i)$$
$T(n)$ is an arithmetic-geometric series; applying the shifted-subtraction method gives $T(n)$ as an expression in $h$, and substituting $n = 2^h - 1$, i.e. $h = \log_2(n+1)$, converts $T(n)$ into an expression in $n$.
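A hedged reconstruction of that shifted-subtraction step (the intermediate algebra is filled in here; the conclusion is the standard result):

$$
\begin{aligned}
T(n) &= 2^{0}(h-1) + 2^{1}(h-2) + \cdots + 2^{h-2}\cdot 1 \\
2T(n) &= 2^{1}(h-1) + 2^{2}(h-2) + \cdots + 2^{h-1}\cdot 1 \\
2T(n) - T(n) &= -(h-1) + 2^{1} + 2^{2} + \cdots + 2^{h-1} = 2^{h} - h - 1 .
\end{aligned}
$$

With $n = 2^h - 1$ and $h = \log_2(n+1)$ this gives $T(n) = n - \log_2(n+1)$, so building the heap by downward adjustment costs $O(N)$.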
3.4 application of the heap
Top-K problem
The Top-K problem: among $N$ elements, find the $K$ largest (or smallest) values.
Before heaps, one could traverse the data $K$ times taking the maximum each time, or sort and take the first $K$ elements. Both work, but neither is the best approach; here we solve the problem with a heap.
Method 1: insert the $N$ numbers into a large heap one by one, then take the heap-top element and delete it $K$ times; the values taken are the $K$ maxima.
The time complexity is $O(N + K\log_2 N)$. But if $N$ is very large, say a billion numbers stored in a file, this method no longer applies.
Method 2:
- Build a heap from the first $K$ of the numbers: to find the $K$ largest values build a small heap; to find the $K$ smallest values build a large heap.
- Compare each of the remaining $N-K$ elements with the heap top in turn; if an element is larger than the heap top, replace the top with it and adjust downward.
- When the traversal ends, the $K$ elements left in the small heap are the $K$ largest values.
A small heap keeps the smallest candidate on top and the larger ones below. Each replacement evicts the current smallest candidate, while larger values entering the heap sink down and stay.
```c
void PrintTopK(int* a, int n, int k)
{
    heap hp;
    HeapInit(&hp);
    // build a small heap from the first k elements
    // (AdjustUp / AdjustDown must compare with '<' for a small heap)
    for (int i = 0; i < k; i++)
    {
        HeapPush(&hp, a[i]);
    }
    // compare the remaining elements with the heap top
    for (int i = k; i < n; i++)
    {
        int ret = HeapTop(&hp);
        if (a[i] > ret)
        {
            HeapPop(&hp);
            HeapPush(&hp, a[i]);
        }
    }
    HeapPrint(&hp);
    HeapDestroy(&hp);
}
```
You could also directly overwrite the heap-top element and adjust downward instead of calling Pop and then Push, but it is better not to touch the heap's internals and to use the interface functions.
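A small test sketch for PrintTopK (the data, planted values, and sizes are assumptions for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int n = 1000, k = 5;
    int* a = (int*)malloc(sizeof(int) * n);

    srand((unsigned)time(NULL));
    for (int i = 0; i < n; i++)
        a[i] = rand() % 10000; // all values below 10000
    // plant k values known to be the largest
    a[10] = 10001;
    a[200] = 10002;
    a[333] = 10003;
    a[500] = 10004;
    a[999] = 10005;

    PrintTopK(a, n, k);        // should print the five planted values
    free(a);
    return 0;
}
```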
Heap sort
Heap sorting is to sort the existing arrays by using the implementation idea of heap. Suppose there is an array a={70,56,30,25,15,10,75}, which is a complete binary tree from the logical structure, but it is not necessarily a heap. Therefore, we need to build the array into a heap before we can sort the heap.
Heap-building analysis
The two ways of building a heap were introduced earlier; just call the heap-building function to turn array a into a heap.
To sort a in ascending order, should we build a large heap or a small heap? Suppose we try a small heap: the heap top, i.e. the first array element, is the smallest number. But to select the next smallest we would have to remove the first element and rebuild a heap from the remaining elements, destroying the heap structure and rebuilding it each time.
Rebuilding the heap costs $O(N)$, so the overall complexity would be $O(N^2)$, which is clearly not acceptable. Since a small heap is not suitable for ascending order, we try a large heap instead.
Sorting analysis
**Sort using the idea of heap deletion.** If we choose ascending order and build a large heap, the logic is:
- build the heap, which selects the largest number;
- swap the first and last elements, which moves the largest number to the end;
- stop treating the last element as part of the heap and adjust downward from the root, which moves the next largest number to the front.
Then swap head and tail again, and repeat until the number of elements still in the heap shrinks to 0. Each downward adjustment costs $O(\log N)$.
From this: build a large heap for ascending order and a small heap for descending order.
```c
// heap sort
void HeapSort(int* a, int sz)
{
    // 1. build the heap
    HeapBuild(a, sz);
    // 2. sort
    for (int i = sz - 1; i >= 0; i--) // i is the last index of the current heap
    {
        // swap head and tail
        Swap(&a[0], &a[i]);
        // adjust the remaining i elements downward
        AdjustDown(a, i, 0);
    }
}
```
So for ascending order we repeatedly take the current largest value and place it at the back of the array, working from back to front. Whether ascending or descending, heap sort takes out the number that belongs at the back and puts it there; it is an application of the downward adjustment algorithm.
Choosing ascending or descending order means building a large or a small heap respectively, which again only changes the comparison operators. The time complexity of heap sort is $O(N\log N)$.
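A small usage sketch of HeapSort on the sample array from above (the driver itself is an assumption):

```c
#include <stdio.h>

int main(void)
{
    int a[] = { 70, 56, 30, 25, 15, 10, 75 };
    int sz = sizeof(a) / sizeof(a[0]);

    HeapSort(a, sz); // builds a large heap, then sorts in ascending order

    for (int i = 0; i < sz; i++)
        printf("%d ", a[i]); // expected: 10 15 25 30 56 70 75
    printf("\n");
    return 0;
}
```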
4 chain structure of binary tree
4.1 traversal of binary tree
Chain structure
An ordinary binary tree is too irregular to be worth using just to store data, so adding, deleting, finding, and modifying in it is of little interest. The value of binary trees shows in specific variants such as the binary search tree, balanced binary search trees (the AVL tree and the red-black tree), the B-tree, and so on, which are studied later with advanced data structures.
The key characteristic of the binary tree structure is that a binary tree and each of its subtrees can be divided into three parts: the root node, the left subtree, and the right subtree.
As shown in the figure, any binary tree can be divided into root, left subtree and right subtree. An empty tree is the smallest unit that can not be subdivided.
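The traversal code below uses a node type BTNode that this excerpt never defines; a minimal sketch of the assumed binary-chain node (BTDataType taken to be char so the printed results match A, B, C, ...):

```c
typedef char BTDataType; // assumed element type

typedef struct BinaryTreeNode
{
    BTDataType data;               // node value, e.g. 'A'
    struct BinaryTreeNode* left;   // left child
    struct BinaryTreeNode* right;  // right child
} BTNode;
```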
Preorder, inorder and postorder traversal
To learn the binary tree structure we must traverse it. A binary tree traversal visits and operates on every node of the tree according to a specific rule, with each node visited exactly once.
Traversal | Explanation |
---|---|
Preorder traversal | Visit the root node first, then the left subtree, then the right subtree; also called root-first traversal |
Inorder traversal | Visit the left subtree first, then the root node, then the right subtree |
Postorder traversal | Visit the left subtree first, then the right subtree, and the root node last |
The way any binary tree, including each of its subtrees, is visited is fixed; the only difference among the three traversals is when the root node is visited.
The preorder, inorder, and postorder traversal results of the binary tree above are as follows:
Preorder: A B D 0 0 0 C E 0 0 F 0 0
Inorder: 0 D 0 B 0 A 0 E 0 C 0 F 0
Postorder: 0 0 D 0 B 0 0 E 0 0 F C A
Writing out the empty trees (the 0s) reflects how the traversal really proceeds; removing them gives the final result.
The idea of traversing the tree in code is recursion:
```c
// preorder traversal
void PreOrder(BTNode* root)
{
    if (root == NULL)
    {
        printf("\\0 "); // marker for an empty tree
        return;
    }
    printf("%c ", root->data);
    PreOrder(root->left);
    PreOrder(root->right);
}

// inorder traversal
void InOrder(BTNode* root)
{
    if (root == NULL)
    {
        printf("\\0 ");
        return;
    }
    InOrder(root->left);
    printf("%c ", root->data);
    InOrder(root->right);
}

// postorder traversal
void PostOrder(BTNode* root)
{
    if (root == NULL)
    {
        printf("\\0 ");
        return;
    }
    PostOrder(root->left);
    PostOrder(root->right);
    printf("%c ", root->data);
}
```
The recursion of the preorder traversal unfolds as shown in the figure below:
The recursive call structure of the three traversals is exactly the same; only the moment at which the value is printed differs, so the outputs differ while the order in which nodes are reached is the same.
Level-order traversal
Level-order traversal visits the tree from top to bottom, layer by layer, and is implemented with a queue.
```c
void levelOrder(BTNode* root)
{
    if (root == NULL)
    {
        return;
    }
    Queue q;
    QueueInit(&q);
    // 1. enqueue the root node
    QueuePush(&q, root);
    while (!QueueEmpty(&q))
    {
        BTNode* front = QueueFront(&q);
        printf("%c ", front->data);
        // 2. dequeue the front node
        QueuePop(&q);
        // 3. enqueue its children
        if (front->left)
        {
            QueuePush(&q, front->left);
        }
        if (front->right)
        {
            QueuePush(&q, front->right);
        }
    }
    QueueDestroy(&q);
}
```
- Create a queue and enqueue the root node first.
- Dequeue the front node and enqueue its children; finishing one layer in this way brings the whole next layer into the queue.
- When the queue is empty, the traversal ends.
The loop keeps running while the queue is not empty; only when the nodes of the last layer have no children to enqueue does the queue shrink until it finally becomes empty.
4.2 basic practice of binary tree
Recursion embodies divide and conquer: turn a big problem into smaller subproblems of the same kind, and small problems into trivial ones. The following basic binary tree exercises are all solved with this recursive strategy.
Number of binary tree nodes
```c
//1. count with an output parameter
void BinaryTreeSize(BTNode* root, int* pcount)
{
    if (root == NULL)
    {
        return;
    }
    (*pcount)++;
    BinaryTreeSize(root->left, pcount);
    BinaryTreeSize(root->right, pcount);
}

//2. divide and conquer: left subtree + right subtree + 1
int BinaryTreeSize(BTNode* root)
{
    /*
    if (root == NULL)
    {
        return 0;
    }
    else
    {
        return BinaryTreeSize(root->left) + BinaryTreeSize(root->right) + 1;
    }
    */
    return root == NULL ? 0 : BinaryTreeSize(root->left) + BinaryTreeSize(root->right) + 1;
}
```
If you use the counter version, you must pass the address of a counter variable defined in the caller (as in main on an OJ); a counter defined inside the recursion would be reset in every call.
Using recursive divide and conquer, counting the nodes of any tree is always the same kind of problem: the number of nodes in the left subtree + the number in the right subtree + 1. The big problem is reduced to smaller ones:
- Find the number of tree nodes of A, that is, the number of left subtree nodes of A + the number of right subtree nodes + 1,
- The left subtree of A is the B tree, and the number of nodes of B tree is the number of nodes of B left subtree + the number of nodes of B right subtree + 1,
- And so on, until the base case: an empty tree cannot be divided further and contributes 0.
Number of binary tree leaf nodes
```c
int BinaryTreeLeafSize(BTNode* root)
{
    // empty tree
    if (root == NULL)
    {
        return 0;
    }
    // leaf node
    else if (root->left == NULL && root->right == NULL)
    {
        return 1;
    }
    // branch node
    else
    {
        return BinaryTreeLeafSize(root->left) + BinaryTreeLeafSize(root->right);
    }
}
```
By the same logic, the number of leaf nodes of any tree is the number of leaf nodes of its left subtree plus the number of leaf nodes of its right subtree. The base case is reaching a leaf node, characterized by both children being empty. The recursion proceeds with the same pattern:
- The number of leaf nodes of tree A is the number of leaf nodes of A's left subtree plus the number of leaf nodes of A's right subtree.
- The left subtree of A is tree B, whose leaf count is the leaf count of B's left subtree plus that of B's right subtree; the right subtree of A is tree C, counted the same way. Recurse downward until a node whose left and right subtrees are both empty is reached, which returns 1.
Number of nodes in any layer of binary tree
```c
int BinaryTreeLevelkSize(BTNode* root, int k)
{
    if (root == NULL)
    {
        return 0;
    }
    if (k == 1)
    {
        return 1;
    }
    else
    {
        return BinaryTreeLevelkSize(root->left, k - 1)
             + BinaryTreeLevelkSize(root->right, k - 1);
    }
}
```
- Counting the nodes in the $k$-th layer of tree A converts into its subtrees: the number of nodes in the $(k-1)$-th layer of tree B plus the number in the $(k-1)$-th layer of tree C.
- Counting the nodes in the $(k-1)$-th layer of tree B likewise becomes the number of nodes in the $(k-2)$-th layer of tree D plus the number in the $(k-2)$-th layer of an empty tree.
- And so on: an empty tree contributes 0; when $k = 1$ the traversal has reached the target layer, so a non-empty node counts 1; otherwise the problem is converted into counting in the left and right subtrees.
Binary tree height
```c
int BinaryTreeDepth(BTNode* root)
{
    if (root == NULL)
    {
        return 0;
    }
    else
    {
        int leftDepth = BinaryTreeDepth(root->left);
        int rightDepth = BinaryTreeDepth(root->right);
        // height = larger subtree height + 1 for the root
        return (leftDepth > rightDepth ? leftDepth : rightDepth) + 1;
    }
}
```
The height problem reduces to reaching empty nodes: each non-empty node adds 1, and an empty tree adds 0. With the recursive idea, the height of tree A is the larger of the heights of its left and right subtrees plus 1, the height of tree B is the larger of the heights of its subtrees plus 1, the height of tree C likewise, and so on until an empty tree is met.
Avoid writing `return BinaryTreeDepth(root->left) > BinaryTreeDepth(root->right) ? BinaryTreeDepth(root->left) + 1 : BinaryTreeDepth(root->right) + 1;` — the depths are then computed once for the comparison and again for the return value, which causes an explosion of repeated recursive calls.
Counting the nodes of a tree and computing its height are classic postorder-style problems: the left and right subtrees are processed first and the root is handled last.
Binary tree lookup node
```c
BTNode* BinaryTreeFind(BTNode* root, BTDataType x)
{
    if (root == NULL)
    {
        return NULL;
    }
    if (root->data == x)
    {
        return root;
    }
    // search the left subtree first
    BTNode* leftRet = BinaryTreeFind(root->left, x);
    if (leftRet)
    {
        return leftRet;
    }
    // then the right subtree
    BTNode* rightRet = BinaryTreeFind(root->right, x);
    if (rightRet)
    {
        return rightRet;
    }
    return NULL;
}
```
Finding a node is a typical preorder traversal: check the root of A first; if it is not the target, search A's left subtree, then its right subtree. Note that the result of the left-subtree search must be returned only when it is non-NULL; returning it unconditionally would prevent the right subtree from ever being searched.
4.3 binary tree based interview questions
Example 1 Single valued binary tree
```c
bool isUnivalTree(struct TreeNode* root)
{
    if (root == NULL)
    {
        return true;
    }
    if (root->left && (root->val != root->left->val))
    {
        return false;
    }
    if (root->right && (root->val != root->right->val))
    {
        return false;
    }
    return isUnivalTree(root->left) && isUnivalTree(root->right);
}
```
This is again the same kind of subproblem: take each node and its left and right children as a group of (at most) three and check whether their values are equal; the base case returns true for an empty tree. Recurse downward in the same way.
Example 2 Same binary tree
```c
bool isSameTree(struct TreeNode* p, struct TreeNode* q)
{
    // both empty
    if (!p && !q)
    {
        return true;
    }
    // exactly one empty
    if (!p || !q)
    {
        return false;
    }
    // neither empty: compare values
    if (p->val != q->val)
    {
        return false;
    }
    return isSameTree(p->left, q->left) && isSameTree(p->right, q->right);
}
```
Compare the trees in both structure and values: compare the roots first, then recurse into the left and right subtrees. Either both roots are empty (true), or exactly one is empty (false), or neither is empty but the values differ (false). These three checks cover all terminating cases; when the values are equal, recurse on both pairs of children.
Example 3 Symmetric binary tree
```c
bool _isSymmetric(struct TreeNode* l, struct TreeNode* r)
{
    if (!l && !r)
    {
        return true;
    }
    if (!l || !r)
    {
        return false;
    }
    if (l->val != r->val)
    {
        return false;
    }
    // compare mirrored pairs: left-left with right-right, left-right with right-left
    return _isSymmetric(l->left, r->right) && _isSymmetric(l->right, r->left);
}

bool isSymmetric(struct TreeNode* root)
{
    if (root == NULL)
    {
        return true;
    }
    return _isSymmetric(root->left, root->right);
}
```
Hand the root's left and right subtrees to a helper function and recurse there. The checks are the same as in the previous problem, except that one side goes left while the other goes right, so the comparison is mirrored across the center.
Example 4 Preorder, inorder and postorder traversal
```c
int TreeSize(struct TreeNode* root)
{
    return root == NULL ? 0 : TreeSize(root->left) + TreeSize(root->right) + 1;
}

void _preorderTraversal(struct TreeNode* root, int* a, int* pi)
{
    if (root == NULL)
    {
        return;
    }
    //1. preorder: store the root here
    a[(*pi)++] = root->val;
    _preorderTraversal(root->left, a, pi);
    //2. inorder: store the root here instead: a[(*pi)++] = root->val;
    _preorderTraversal(root->right, a, pi);
    //3. postorder: store the root here instead: a[(*pi)++] = root->val;
}

int* preorderTraversal(struct TreeNode* root, int* returnSize)
{
    int size = TreeSize(root);
    int* a = (int*)malloc(size * sizeof(int));
    int i = 0;
    _preorderTraversal(root, a, &i);
    *returnSize = size;
    return a;
}
```
The three traversals are still recursive; the difference is that instead of printing each node's value, it is stored into a dynamically allocated array. Note that defining a new index variable inside each recursive call would not survive the calls and returns, so the address of a single index variable must be passed down.
Example 5 A subtree of another tree
```c
bool isSubtree(struct TreeNode* root, struct TreeNode* subRoot)
{
    if (!root)
    {
        return false;
    }
    if (isSameTree(root, subRoot))
    {
        return true;
    }
    return isSubtree(root->left, subRoot) || isSubtree(root->right, subRoot);
}
```
Compare every subtree of root with subRoot; return as soon as one matches. If the current root does not match, recurse into the left and right subtrees. Because the return value is Boolean and the right side only needs to be searched when the left side fails, the two recursive results can be combined with ||.
Assuming root has $n$ nodes, the best case for this recursion is that root and subRoot are identical: the very first comparison succeeds and the time complexity is $O(n)$. In the worst case the trees differ only at the last node, every node of root is compared, and isSameTree also recurses to the last node each time, giving $O(n^2)$.
4.4 creation and destruction of binary tree
Binary tree destruction
```c
void BinaryTreeDestroy(BTNode* root)
{
    if (!root)
    {
        return;
    }
    BinaryTreeDestroy(root->left);
    BinaryTreeDestroy(root->right);
    free(root);
}
```
Once a node has been freed its children can no longer be reached, so postorder traversal is used: destroy the left and right subtrees first, then free the root.
Determine whether it is a complete binary tree
```c
bool isBinaryTreeComplete(BTNode* root)
{
    if (root == NULL)
    {
        return true;
    }
    Queue q;
    QueueInit(&q);
    QueuePush(&q, root);
    // 1. level-order traversal until the first empty node is met
    while (!QueueEmpty(&q))
    {
        BTNode* front = QueueFront(&q);
        QueuePop(&q);
        if (front == NULL)
        {
            break;
        }
        else
        {
            QueuePush(&q, front->left);
            QueuePush(&q, front->right);
        }
    }
    // 2. check whether any non-empty node remains in the queue
    while (!QueueEmpty(&q))
    {
        BTNode* front = QueueFront(&q);
        QueuePop(&q);
        if (front)
        {
            QueueDestroy(&q);
            return false;
        }
    }
    QueueDestroy(&q);
    return true;
}
```
In a level-order traversal of a complete binary tree, the non-empty nodes are contiguous; in an incomplete one, an empty position appears before some non-empty node. So as soon as a non-empty node is found after an empty one, the tree is not a complete binary tree.
- Enqueue the root, then repeatedly dequeue the front node and enqueue its two children (including empty ones).
- When the front of the queue is empty, leave the loop and check whether every node remaining in the queue is empty; if any non-empty node remains, the tree is not a complete binary tree.
Example 6 Creating a tree from a traversal
```c
#include <stdio.h>
#include <stdlib.h>

struct TreeNode
{
    struct TreeNode* left;
    struct TreeNode* right;
    char val;
};

// build the tree from a preorder string where '#' marks an empty tree
struct TreeNode* CreateTree(char* str, int* pi)
{
    if (str[*pi] == '#')
    {
        (*pi)++;
        return NULL;
    }
    struct TreeNode* root = (struct TreeNode*)malloc(sizeof(struct TreeNode));
    root->val = str[(*pi)++];
    root->left = CreateTree(str, pi);
    root->right = CreateTree(str, pi);
    return root;
}

void Inorder(struct TreeNode* root)
{
    if (!root)
    {
        return;
    }
    Inorder(root->left);
    printf("%c ", root->val);
    Inorder(root->right);
}

int main()
{
    char str[100] = { 0 };
    scanf("%s", str);
    int i = 0;
    struct TreeNode* root = CreateTree(str, &i);
    Inorder(root);
    return 0;
}
```
The string ABC##DE#G##F### is the preorder traversal sequence of a tree, with '#' marking an empty tree. From it, A must be the root, followed by the preorder sequences of its left and right subtrees. Applying the preorder rule recursively reconstructs the binary tree.