# 1. Introduction

If I want to buy an iPhone 13 Pro Max, I might search for it on Baidu.

The search returns plenty of related listings. I want the lowest price, but I'm afraid of running into a scammer. What should I do?

This is where sorting comes in: we can sort by seller credit or by price, putting the items that best match our expectations first, and eventually find the one worth buying.

# 2. Basic concept and classification of sorting

Sorting is a problem we face constantly in daily life. Students doing morning exercises line up from shortest to tallest; a teacher taking attendance calls names in order of student number; college entrance examination candidates are admitted in descending order of total score.

Assume a sequence of n records {r1, r2, r3, ..., rn} with corresponding keys {k1, k2, k3, ..., kn}. Sorting determines a permutation p1, p2, p3, ..., pn of 1..n such that kp1 <= kp2 <= ... <= kpn (non-decreasing, or non-increasing), which makes {rp1, rp2, rp3, ..., rpn} a sequence ordered by key. In other words, sorting arranges the whole data set in ascending or descending order according to a given key.

For example, when selecting outstanding students, what do we do when total scores are equal?

```
mysql> select student.name, course.name, score.score
    -> from student, course, score
    -> where student.id = score.student_id and score.course_id = course.id;
+----------------------------+-----------------------------+-------+
| name                       | name                        | score |
+----------------------------+-----------------------------+-------+
| Li Kui the Black Whirlwind | Java                        |  70.5 |
| Li Kui the Black Whirlwind | Computer Principles         |  98.5 |
| Li Kui the Black Whirlwind | Advanced Mathematics        |  33.0 |
| Li Kui the Black Whirlwind | English                     |  98.0 |
| The Grapes                 | Java                        |  60.0 |
| The Grapes                 | Advanced Mathematics        |  59.5 |
| Bai Suzhen                 | Java                        |  33.0 |
| Bai Suzhen                 | Computer Principles         |  68.0 |
| Bai Suzhen                 | Advanced Mathematics        |  99.0 |
| Xu Xian                    | Java                        |  67.0 |
| Xu Xian                    | Computer Principles         |  23.0 |
| Xu Xian                    | Advanced Mathematics        |  56.0 |
| Xu Xian                    | English                     |  72.0 |
| Don't want to graduate     | Java                        |  81.0 |
| Don't want to graduate     | Advanced Mathematics        |  37.0 |
| Speak in a normal way      | Chinese Traditional Culture |  56.0 |
| Speak in a normal way      | Chinese                     |  43.0 |
| Speak in a normal way      | English                     |  79.0 |
| tellme                     | Chinese Traditional Culture |  80.0 |
| tellme                     | English                     |  92.0 |
+----------------------------+-----------------------------+-------+
```

We sort by total score first, then by a secondary key to break ties.
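Multi-key ordering like "total score first, then a sub-key" is usually expressed as comparator composition in Java. A minimal sketch (the `Score` class and sample rows are hypothetical, not taken from the table above):

```java
import java.util.Arrays;
import java.util.Comparator;

public class MultiKeySort {
    // Hypothetical record: a student's total score plus the name used as tie-breaker
    static class Score {
        String name;
        double total;
        Score(String name, double total) { this.name = name; this.total = total; }
    }

    public static void main(String[] args) {
        Score[] rows = {
            new Score("Xu Xian", 218.0),
            new Score("Bai Suzhen", 200.0),
            new Score("The Grapes", 218.0),
        };
        // Primary key: total score descending; sub-key: name ascending on ties
        Arrays.sort(rows, Comparator
                .comparingDouble((Score s) -> s.total).reversed()
                .thenComparing(s -> s.name));
        for (Score s : rows) {
            System.out.println(s.name + " " + s.total);
        }
    }
}
```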

## 2.1 Sort stability

Sorting is not limited to a single primary key; sorting on multiple keys can ultimately be reduced to single-key sorting, so we will focus on single-key sorting.

However, the record sequence may contain two or more records with equal keys, so the sorting result may not be unique.

Suppose ki = kj (1 <= i, j <= n, i != j) and ri precedes rj (i < j) in the pre-sorted sequence:

If ri still precedes rj after sorting, the sort is stable.

Conversely, if rj ends up ahead of ri, the sort is unstable.
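Stability can be observed directly in code: Java's `Arrays.sort` on objects is documented to be stable, so records with equal keys keep their original relative order. A small sketch (the `Rec` class is hypothetical):

```java
import java.util.Arrays;
import java.util.Comparator;

public class StabilityDemo {
    // Two fields: the sort key, and an id recording the original position
    static class Rec {
        int key, id;
        Rec(int key, int id) { this.key = key; this.id = id; }
    }

    public static void main(String[] args) {
        // id 0 and id 2 share key 5; id 0 precedes id 2 before sorting
        Rec[] recs = { new Rec(5, 0), new Rec(3, 1), new Rec(5, 2) };
        Arrays.sort(recs, Comparator.comparingInt((Rec r) -> r.key));
        // A stable sort keeps id 0 ahead of id 2 among the equal keys
        for (Rec r : recs) {
            System.out.println("key=" + r.key + " id=" + r.id);
        }
    }
}
```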

## 2.2 Inner Sort and Outer Sort

Sorting is divided into internal and external sorting, depending on whether all records to be sorted fit in memory.

Internal sorting: all records to be sorted reside in memory for the whole sorting process.

External sorting: there are too many records to fit in memory at once, so the sorting process repeatedly exchanges data between memory and external storage.

Three main factors affect the performance of a sorting algorithm:

- Time performance: comparisons and moves are the main operations in internal sorting, so reduce the number of comparisons and moves
- Auxiliary space: the storage the algorithm needs beyond the space occupied by the records being sorted
- Algorithmic complexity: the intricacy of the algorithm itself, not its time complexity

## 2.3 Classification

Simple algorithms: bubble sort, selection sort, insertion sort

Improved algorithms: Shell sort, heap sort, quicksort, merge sort

# 3. Sorting algorithm implementation

All of the implementations below sort in ascending order.

## 3.1. Bubble sorting

Bubble Sort is an exchange sort. Its basic idea: compare the keys of adjacent records pairwise and swap them if they are in reverse order, until no records remain out of order.

```java
/*
 * Time complexity:
 *   Worst:   O(N^2) [1+2+3+...+(n-1) = n*(n-1)/2]
 *   Best:    O(N)   [data already in order: n-1 comparisons, no swaps]
 *   Average: O(N^2)
 * Space complexity: O(1)
 * Stability: stable
 * Data object: array
 * [unordered | ordered] --> compare pairs in the unordered region, bubbling
 * the largest element to the end of it each pass
 * Scenario: rarely used in practice
 */
class BubbleSort {
    void bubbleSort(int[] arr) {
        long before = System.currentTimeMillis();
        // Repeat for all elements [two numbers form one comparison, so n
        // elements need n-1 passes]
        for (int i = 0; i < arr.length - 1; ++i) {
            // Set up a flag: when no elements are swapped during a pass, the
            // sequence is proved to be ordered. (This helps little on random data.)
            boolean flag = true;
            // Compare each adjacent pair from the first to the last unsorted one.
            // When a pass is done, the largest element has "bubbled" to the end
            for (int j = 0; j < arr.length - i - 1; ++j) {
                // Compare adjacent elements; if the first is bigger than the
                // second, swap the two
                if (arr[j] > arr[j + 1]) {
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    flag = false;
                }
            }
            if (flag) {
                break;
            }
        }
        long after = System.currentTimeMillis();
        System.out.println("BubbleSort time: " + (after - before));
    }
}
```

Single step diagram:

Add a flag optimized diagram:
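For reference, the same bubble-sort logic in a self-contained harness that can be run directly (the timing code is omitted here):

```java
import java.util.Arrays;

public class BubbleDemo {
    static void bubbleSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; ++i) {
            boolean flag = true;
            for (int j = 0; j < arr.length - i - 1; ++j) {
                if (arr[j] > arr[j + 1]) { // adjacent pair out of order: swap
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    flag = false;
                }
            }
            if (flag) break; // no swaps in this pass: already sorted
        }
    }

    public static void main(String[] args) {
        int[] arr = {9, 1, 5, 8, 3, 7, 4, 6, 2};
        bubbleSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```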

## 3.2. Select Sort

Bubble sort is like a short-term speculator who trades constantly to capture small differences: few mistakes, but little profit once fees and stamp duty pile up. Selection sort is like a trader who moves rarely, watches for the right moment, and closes fewer, more decisive trades for more eventual revenue. It suits small data sets, and its one real virtue is that it takes no extra memory.

```java
/*
 * Time complexity:
 *   Worst:   O(N^2) [always 1+2+3+...+(n-1) = n(n-1)/2 comparisons]
 *   Best:    O(N^2) [the comparisons happen even on sorted data; only the
 *            swaps drop to 0, so it is merely slightly better than bubble]
 *   Average: O(N^2)
 * Space complexity: O(1)
 * Stability: unstable
 * Data object: linked list, array
 * [ordered | unordered] --> find the minimum in the unordered region and
 * append it to the ordered region. For arrays: many comparisons, few swaps
 * Scenario: rarely used in practice
 */
class SelectSort {
    void selectSort(int[] arr) {
        long before = System.currentTimeMillis();
        // n-1 rounds in total
        for (int i = 0; i < arr.length - 1; ++i) {
            // Each round makes n-i-1 comparisons to find the subscript of the minimum
            int minIndex = i;
            for (int j = i + 1; j < arr.length; ++j) {
                if (arr[minIndex] > arr[j]) {
                    minIndex = j;
                }
            }
            // Swap once the minimum is found: small values accumulate at the
            // front in ascending order
            if (minIndex != i) {
                int tmp = arr[minIndex];
                arr[minIndex] = arr[i];
                arr[i] = tmp;
            }
        }
        long after = System.currentTimeMillis();
        System.out.println("SelectSort time: " + (after - before));
    }
}
```

Single-step swap diagram:

## 3.3. Insert Sort

Although the code for insertion sort is not as blunt and simple as bubble sort or selection sort, its principle should be the easiest to understand: anyone who has played poker grasps it in seconds. Insertion sort is the simplest and most intuitive sorting algorithm. It works by building an ordered sequence and, for each unsorted element, scanning backward through the sorted part to find the right position and inserting it there.

Even someone playing poker for the first time needs no instruction in arranging a hand: given the cards 5, 3, 4, 6, 2, you move 3 and 4 to the left of 5, and 2 to the far left, and the order is fine. This way of arranging cards is exactly direct insertion sort.

Like bubble sort, insertion sort has an optimized variant: binary (split-half) insertion sort.
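The "split-half" variant finds the insert position by binary search over the already-ordered prefix, cutting comparisons to O(log i) per element while the number of moves stays the same. A sketch of how it might look (this exact implementation is not from the original article); the direct, non-binary version used in this article follows below:

```java
import java.util.Arrays;

public class BinaryInsertSort {
    static void binaryInsertSort(int[] arr) {
        for (int i = 1; i < arr.length; ++i) {
            int tmp = arr[i];
            // Binary-search the ordered prefix arr[0..i) for the insert position.
            // On equal keys we keep searching right, which preserves stability.
            int lo = 0, hi = i;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (arr[mid] <= tmp) lo = mid + 1;
                else hi = mid;
            }
            // Shift everything from the insert position one step right
            for (int j = i - 1; j >= lo; --j) {
                arr[j + 1] = arr[j];
            }
            arr[lo] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {9, 1, 5, 8, 3};
        binaryInsertSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 3, 5, 8, 9]
    }
}
```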

```java
/*
 * Time complexity:
 *   Worst:   O(N^2) [comparisons: 1+2+3+...+(n-1) = n(n-1)/2,
 *            moves: (n-1)+(n-2)+...+1 = n(n-1)/2]
 *   Best:    O(N)   [on sorted data arr[j] > tmp fails immediately for every i,
 *            since each arr[i-1] < arr[i]: n-1 comparisons, no moves]
 *   Average: O(N^2) [for random records, about n^2/4 comparisons and n^2/4
 *            moves by the equal-probability argument]
 * Space complexity: O(1)
 * Stability: stable
 * Note: among the O(N^2) sorts, direct insertion performs better than bubble
 * and simple selection sort
 * Data object: array, linked list
 * [ordered | unordered] --> insert the first element of the unordered region
 * into its place in the ordered region. For arrays: fewer comparisons, more moves
 * Scenario: good when the data is small or mostly sorted and stability matters
 */
class InsertSort {
    void insertSort(int[] arr) {
        long before = System.currentTimeMillis();
        // Treat the first element as an ordered sequence and the second through
        // last elements as the unsorted sequence
        for (int i = 1; i < arr.length; ++i) {
            // Scan the ordered prefix from back to front and insert arr[i] in
            // its place. [An element equal to one already in the ordered prefix
            // is inserted after it, which keeps the sort stable]
            int tmp = arr[i];
            int j = i - 1;
            for (; j >= 0 && arr[j] > tmp; --j) {
                arr[j + 1] = arr[j];
            }
            arr[j + 1] = tmp;
        }
        long after = System.currentTimeMillis();
        System.out.println("InsertSort time:" + (after - before));
    }
}
```

Single step diagram:

- i=1, tmp=3, arr[0]=5, arr[0] > tmp;
- arr[0] is assigned to arr[1]
- tmp is assigned to arr[0]
- 5 now sits where 3 was, and 3 where 5 was

Enter Next Cycle

- i=2, tmp=4, arr[1]=5, arr[1] > tmp;
- arr[1] is assigned to arr[2]
- tmp is assigned to arr[1]
- 5 now sits where 4 was, and 4 where 5 was

Enter Next Cycle

- i=3, tmp=6; no element in the ordered prefix is larger than 6, so nothing moves

Enter Next Cycle

- i=4; every element in the ordered prefix is larger than tmp=2, so arr[j+1] = arr[j] shifts them all back one position
- arr[j+1] = tmp then writes 2 into the front, before the 3 that started the last shift
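The walkthrough above appears to trace the array {5,3,4,6,2}; printing the array after each pass of the same insertion loop reproduces the steps:

```java
import java.util.Arrays;

public class InsertTrace {
    // Same loop as the insertSort above; prints the array after each pass
    static void insertSortTrace(int[] arr) {
        for (int i = 1; i < arr.length; ++i) {
            int tmp = arr[i];
            int j = i - 1;
            for (; j >= 0 && arr[j] > tmp; --j) {
                arr[j + 1] = arr[j]; // shift larger elements one slot right
            }
            arr[j + 1] = tmp; // drop tmp into the gap
            System.out.println("i=" + i + ": " + Arrays.toString(arr));
        }
    }

    public static void main(String[] args) {
        insertSortTrace(new int[]{5, 3, 4, 6, 2});
    }
}
```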

## 3.4. Shell Sort

Shell sort, also known as diminishing increment sort, is a more efficient improvement on insertion sort. It is, however, an unstable sorting algorithm.

Shell sort improves on two properties of insertion sort:

- Insertion sort is efficient on almost-sorted data, approaching linear time
- But insertion sort is generally inefficient because it moves data only one position at a time

The basic idea of Shell sort: split the record sequence into several subsequences and insertion-sort each; once the whole sequence is "basically ordered", finish with one direct insertion sort over all records.

### 3.4.1 Shell sort principle

The summary above outlines Shell sort's characteristics; the principles are described in detail below.

Direct insertion sort, despite its average of about n^2/4 operations, can be very efficient: when the data is basically ordered, only a few inserts complete the sort, and its advantage is especially obvious when the number of records is small.

So how can we reduce the number of records per sort?

Grouping the data comes to mind. It is a bit like packet forwarding in a network: one piece is too big to send whole, so it is split, sent in parts, and reassembled into the complete data.

Shell sort splits the data into several subsequences, each containing fewer records, and insertion-sorts each one separately. When the entire sequence is basically ordered, all of the segments are finished off with one direct insertion sort.

Note:

Take {9,1,5,8,3,7,4,6,2} and group it into three segments {9,1,5}, {8,3,7}, {4,6,2}. Sorting each segment gives {1,5,9}, {3,7,8}, {2,4,6}, i.e. {1,5,9,3,7,8,2,4,6}. This is still out of order, not "basically ordered" at all [9 near the front, 2 near the back]. "Basically ordered" means: small keys mostly at the front, large keys mostly at the back, and middling keys mostly in the middle. {2,1,3,6,4,7,5,8,9} can be called basically ordered.

### 3.4.2 Shell sort algorithm implementation

```java
/*
 * Time complexity: depends on the gap sequence. With good sequences it is
 *   roughly O(N^(3/2)), and about O(N^1.3) on average; a bad sequence
 *   degrades to O(N^2).
 * Space complexity: O(1)
 * Stability: unstable
 * Note: insertion sort is efficient on almost-sorted data but moves elements
 * only one position at a time; Shell sort fixes that with shrinking gaps
 * Data object: array
 * Each round runs a gapped insertion sort with a pre-determined gap; the gap
 * decreases each round and must end at 1
 */
class ShellSort {
    void shellSort(int[] arr) {
        long before = System.currentTimeMillis();
        // Choose a decreasing increment sequence t1 > t2 > ... > tk with tk = 1
        int increment = arr.length;
        do {
            increment = increment / 3 + 1;
            // Gapped insertion sort: each subsequence of elements `increment`
            // apart is insertion-sorted
            for (int i = increment; i < arr.length; ++i) {
                int tmp = arr[i];
                int j = i - increment;
                for (; j >= 0 && arr[j] > tmp; j -= increment) {
                    arr[j + increment] = arr[j];
                }
                arr[j + increment] = tmp;
            }
        } while (increment > 1);
        long after = System.currentTimeMillis();
        System.out.println("ShellSort time:" + (after - before));
    }
}
```

Suppose shellSort(arr) passes in an array of {9,1,5,8,3,7,4,6,2}

The increment is computed from the number of records: arr.length/3 + 1 = 4.

- First cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 9 | 1 | 5 | 8 | 3 | 7 | 4 | 6 | 2 |

increment value: 4

i value: 4

tmp value: 3

j value: 0

arr[j] is compared with tmp and moved back if larger; arr[0] = 9 > tmp, so 9 moves to subscript 4

One comparison, one move

Then tmp is placed in arr[j + increment], i.e. arr[0] = 3

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 5 | 8 | 9 | 7 | 4 | 6 | 2 |

- Second cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 5 | 8 | 9 | 7 | 4 | 6 | 2 |

increment value: 4

i value: 5

tmp value: 7

j value: 1

arr[j] is compared with tmp; j = 1, but arr[1] = 1 is not larger than tmp, so the loop body never runs

One comparison, zero moves

Because j = i - increment,

arr[j + increment] = tmp is arr[1 + 4] = 7, which writes 7 back into its own slot

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 5 | 8 | 9 | 7 | 4 | 6 | 2 |

- Third cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 5 | 8 | 9 | 7 | 4 | 6 | 2 |

increment value: 4

i value: 6

tmp value: 4

j value: 2

arr[j] is compared with tmp and moved back if larger; arr[2] = 5 > tmp, so

One comparison, one move

Then tmp is assigned into arr[j + increment]

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 4 | 8 | 9 | 7 | 5 | 6 | 2 |

- Fourth cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 4 | 8 | 9 | 7 | 5 | 6 | 2 |

increment value: 4

i value: 7

tmp value: 6

j value: 3

arr[3] = 8 > tmp, so

One comparison, one move

Then tmp fills arr[j + increment]

- Fifth cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 3 | 1 | 4 | 6 | 9 | 7 | 5 | 8 | 2 |

increment value: 4

i value: 8

tmp value: 2

j value: 4, then 0

j starts at 4; the loop condition is j >= 0 && arr[j] > tmp, and the update step is j -= increment

So every arr[j] in this subsequence larger than tmp is moved back: two comparisons, two moves, and finally tmp is written into arr[j + increment]

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 2 | 1 | 4 | 6 | 3 | 7 | 5 | 8 | 9 |

- Sixth cycle

| subscript | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| element | 2 | 1 | 4 | 6 | 3 | 7 | 5 | 8 | 9 |

increment value: 4/3+1 = 2;

i value: 2

tmp value: 4

j value: 0

arr[0]<tmp

So compare once, move 0 times

Finally arr[j+increment]=tmp is arr[0+2]=4

The remaining steps are omitted. Notice how the increment divides the array's elements into several subsequences, each of which is insertion-sorted, bringing the whole array to basic order; when the increment finally reaches 1, a last direct insertion sort over the nearly-ordered data finishes the job.
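Running the same shellSort logic on the walkthrough array confirms the final result (timing code omitted):

```java
import java.util.Arrays;

public class ShellDemo {
    static void shellSort(int[] arr) {
        int increment = arr.length;
        do {
            increment = increment / 3 + 1; // 9 -> 4 -> 2 -> 1
            // One gapped insertion-sort pass per increment
            for (int i = increment; i < arr.length; ++i) {
                int tmp = arr[i];
                int j = i - increment;
                for (; j >= 0 && arr[j] > tmp; j -= increment) {
                    arr[j + increment] = arr[j];
                }
                arr[j + increment] = tmp;
            }
        } while (increment > 1);
    }

    public static void main(String[] args) {
        int[] arr = {9, 1, 5, 8, 3, 7, 4, 6, 2};
        shellSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```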

### 3.4.3 Shell sort complexity analysis

From the code we can see that Shell sort does not group records arbitrarily and sort each group on its own: records one "increment" apart form a subsequence, which lets elements jump toward their final positions and makes sorting more efficient.

Choosing the increment is critical. The increment/3 + 1 used here is essentially a mathematical open problem: no provably best increment sequence has been found so far. A large body of study shows that with the sequence delta_k = 2^(t-k+1) - 1 [0 <= k <= t <= log(n+1)], good efficiency of about O(N^(3/2)) can be achieved. Note that the last increment of the sequence must be 1, and that Shell sort is not a stable sort because it exchanges elements across gaps.
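For reference, the cited sequence delta = 2^k - 1 (Hibbard's increments: ..., 15, 7, 3, 1) can be generated as below; this helper is a sketch and not part of the original implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class HibbardIncrements {
    // Hibbard's sequence: 1, 3, 7, 15, ... (2^k - 1), largest first, ending in 1
    static List<Integer> hibbard(int n) {
        List<Integer> gaps = new ArrayList<>();
        for (int gap = 1; gap < n; gap = gap * 2 + 1) {
            gaps.add(0, gap); // prepend so the list is descending
        }
        return gaps;
    }

    public static void main(String[] args) {
        System.out.println(hibbard(20)); // [15, 7, 3, 1]
    }
}
```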

## 3.5. Heap sorting

### 3.5.1 Heap Sorting Principle

Simple selection sort, discussed earlier, selects the smallest of the n records to be sorted and needs n-1 comparisons for that pass.

Unfortunately, it does not save the results of earlier comparisons: many of the comparisons in later passes repeat work already done in earlier passes, because those results were thrown away.

If we could save the comparison results while selecting the smallest record each pass, and adjust accordingly, overall efficiency would be much higher. Heap sort is exactly such an improvement on selection sort.

So what is heap sorting?

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a complete binary tree that satisfies the heap property: the key of each child node is always less than (or greater than) that of its parent. Heap sort performs selection sorting using this heap structure.

There are two methods:

- Large top heap: each node's value is greater than or equal to its children's; used by the heap sort algorithm for ascending order
- Small top heap: each node's value is less than or equal to its children's; used by the heap sort algorithm for descending order
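Both definitions can be checked mechanically: in an array-backed heap, node i has children 2i+1 and 2i+2. A small verifier sketch for the large top heap case:

```java
public class HeapCheck {
    // Returns true if arr satisfies the max-heap ("large top heap") property
    static boolean isMaxHeap(int[] arr) {
        for (int parent = 0; parent <= (arr.length - 2) / 2; ++parent) {
            int left = 2 * parent + 1, right = 2 * parent + 2;
            if (left < arr.length && arr[left] > arr[parent]) return false;
            if (right < arr.length && arr[right] > arr[parent]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A valid large top heap vs. the unadjusted walkthrough array
        System.out.println(isMaxHeap(new int[]{90, 70, 80, 60, 10, 40, 50, 30, 20})); // true
        System.out.println(isMaxHeap(new int[]{50, 10, 90, 30, 70, 40, 80, 60, 20})); // false
    }
}
```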

Diagram before and after heap adjustment

Diagram of large and small top heaps

### 3.5.2 Heap Sorting Algorithm Implementation

```java
/*
 * Time complexity:
 *   Worst:   O(N logN)
 *   Best:    O(N logN)
 *   Average: O(N logN)
 * Space complexity: O(1)
 * Stability: unstable
 * Data object: array
 * [large root heap (or small root heap) | ordered region] --> take the root
 * off the top of the heap into the ordered region, then restore the heap
 */
class HeapSort {
    void heapSort(int[] arr) {
        long before = System.currentTimeMillis();
        // 1. For ascending order, build a large root heap
        createBigHeap(arr);
        int end = arr.length - 1;
        // 2. Swap the heap top (maximum) with the heap tail
        while (end > 0) {
            int tmp = arr[0];
            arr[0] = arr[end];
            arr[end] = tmp;
            // 3. Re-adjust the heap each time so the new top element settles into place
            shiftDown(arr, 0, end--);
        }
        long after = System.currentTimeMillis();
        System.out.println("HeapSort time: " + (after - before));
    }

    private void createBigHeap(int[] arr) {
        // 1. arr.length - 1 is the last element's subscript; subtracting 1 more
        //    and halving gives its parent's subscript
        for (int parent = (arr.length - 2) >> 1; parent >= 0; --parent) {
            // 2. Adjust each parent node, from the bottom up
            shiftDown(arr, parent, arr.length);
        }
    }

    private void shiftDown(int[] arr, int parent, int sz) {
        // 1. Left child of the parent node
        int child = (parent << 1) + 1;
        // 2. Keep sifting while a child exists
        while (child < sz) {
            // 3. If the right child exists and is larger, pick it
            //    (flip the comparison for a small top heap: descending order)
            if (child + 1 < sz && arr[child] < arr[child + 1]) {
                ++child;
            }
            // 4. If the larger child beats the parent, swap them
            //    (flip the comparison for a small top heap: descending order)
            if (arr[child] > arr[parent]) {
                int tmp = arr[child];
                arr[child] = arr[parent];
                arr[parent] = tmp;
                // 5. Move the parent down to the child's position and continue
                parent = child;
                child = (parent << 1) + 1;
            } else {
                // 6. Neither child is larger than the parent: no adjustment needed
                break;
            }
        }
    }
}
```

Detailed step analysis

Figure 1 shows a large top heap with 90 as the maximum. Swap 90 with 20 (the last element), as shown in Figure 2: 90 becomes the last element of the sequence, and 20 is sifted down so that the nodes other than 90 again satisfy the definition of a heap. See Figure 3.

Then the new top 80 is swapped with 30 in the same way, and so on...

Looking at this, I believe you already understand the basic idea of heap sorting, but there are two issues that need to be solved in order to achieve it.

- How to build a heap from an unordered sequence
- How to adjust the remaining elements into a new heap after removing the top element

To explain them clearly, let's elaborate on the code

Here is the overall frame:

```java
// 1. For ascending order, build a large root heap
createBigHeap(arr);
int end = arr.length - 1;
// 2. Swap the heap top (maximum) with the heap tail
while (end > 0) {
    int tmp = arr[0];
    arr[0] = arr[end];
    arr[end] = tmp;
    // 3. Re-adjust the heap each time to settle the new top element
    shiftDown(arr, 0, end--);
}
```

Suppose we want to sort the sequence {50,10,90,30,70,40,80,60,20}. Then end = 8, and the while loop swaps the heap's top and tail elements each round, re-adjusting the heap after every swap.

Look again at the createBigHeap function

```java
private void createBigHeap(int[] arr) {
    // arr.length - 1 is the last element's subscript; subtracting 1 more and
    // halving gives its parent's subscript
    for (int parent = (arr.length - 2) >> 1; parent >= 0; --parent) {
        // Adjust each parent node
        shiftDown(arr, parent, arr.length);
    }
}
```

The first call passes in arr.length = 9; the parent loop starts at subscript 3 and runs down to subscript 0, so every node that has a child gets adjusted. Note the subscript numbers of the grey (parent) nodes in the figure.

Where does 3 come from?

Remember for (int parent = (arr.length - 2) >> 1; parent >= 0; --parent)?

9 - 2 = 7, and 7 >> 1 = 3: the last element has subscript 8, and its parent is (8 - 1) / 2 = 3

Subscript 3's left child is 2*3+1 = 7, and 7 + 1 = 8 is its right child

parent then takes the values 3, 2, 1, 0, and the left and right subtrees of each parent node are adjusted until the array reaches a large top heap.

Knowing which nodes to adjust, we're looking at how the key shiftDown function is implemented

```java
private void shiftDown(int[] arr, int parent, int sz) {
    // 1. Left child of the parent node
    int child = (parent << 1) + 1;
    // 2. Keep sifting while a child exists
    while (child < sz) {
        // 3. If the right child exists and is larger, pick it
        //    (flip the comparison for a small top heap: descending order)
        if (child + 1 < sz && arr[child] < arr[child + 1]) {
            ++child;
        }
        // 4. If the larger child beats the parent, swap them
        //    (flip the comparison for a small top heap: descending order)
        if (arr[child] > arr[parent]) {
            int tmp = arr[child];
            arr[child] = arr[parent];
            arr[parent] = tmp;
            // 5. Move the parent down to the child's position and continue
            parent = child;
            child = (parent << 1) + 1;
        } else {
            // 6. Neither child is larger than the parent: no adjustment needed
            break;
        }
    }
}
```

- The first call passes in

arr = {50,10,90,30,70,40,80,60,20}

parent = 3

sz = 9

child = 2*3+1 = 7

while (7 < 9) holds

The right child (subscript 8) is in bounds, but the left child arr[7] = 60 > arr[8] = 20, so child does not increment

arr[7] = 60 > arr[3] = 30, so the values at subscripts 3 and 7 are swapped

The new parent is subscript 7; its left child 2*7+1 = 15 is out of bounds, so while (15 < 9) fails and the loop ends

After adjustment: arr = {50,10,90,60,70,40,80,30,20}

- The second call passes in

arr = {50,10,90,60,70,40,80,30,20}

parent = 2

sz = 9

child = 2*2+1 = 5

while (5 < 9) holds

The right child is in bounds and the left child arr[5] = 40 < arr[6] = 80, so child increments to 6

arr[6] = 80 < arr[2] = 90, so no swap is needed and the loop breaks

- The third call passes in

arr = {50,10,90,60,70,40,80,30,20}

parent = 1

sz = 9

child = 2*1+1 = 3

while (3 < 9) holds

The right child is in bounds and the left child arr[3] = 60 < arr[4] = 70, so child increments to 4

arr[4] = 70 > arr[1] = 10, so the values at subscripts 1 and 4 are swapped

The new parent is subscript 4; its left child 2*4+1 = 9 is out of bounds, so while (9 < 9) fails and the loop ends

After adjustment: arr = {50,70,90,60,10,40,80,30,20}

- The fourth call passes in

arr = {50,70,90,60,10,40,80,30,20}

parent = 0

sz = 9

child = 2*0+1 = 1

while (1 < 9) holds

(1+1) < 9 and the left child arr[1] = 70 < arr[2] = 90, so child increments to 2

arr[2] = 90 > arr[0] = 50, so the values at subscripts 0 and 2 are swapped

New parent = 2, new child = 2*2+1 = 5

(5+1) < 9 and arr[5] = 40 < arr[6] = 80, so ++child makes it 6

arr[6] = 80 > arr[2] = 50, so they are swapped

New parent = 6; the new child 2*6+1 = 13 is out of bounds, so the loop ends

- After the fourth call, parent is decremented to -1 and the build loop ends

The array {90,70,80,60,10,40,50,30,20} is now a large top heap

- Looking again at

```java
while (end > 0) {
    int tmp = arr[0];
    arr[0] = arr[end];
    arr[end] = tmp;
    // Re-adjust the heap each time to settle the new top element
    shiftDown(arr, 0, end--);
}
```

Swap heap top elements with heap tail elements

Then shiftDown continues adjusting the heap over the remaining size-1 elements [i.e. with 90 excluded]

... and so on, until the heap size reaches 1 and the adjustment is complete, leaving the array in ascending order
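End to end, the same heapSort logic on the walkthrough array (timing code omitted):

```java
import java.util.Arrays;

public class HeapDemo {
    static void heapSort(int[] arr) {
        // Build a max-heap bottom-up, then repeatedly move the max to the end
        for (int parent = (arr.length - 2) >> 1; parent >= 0; --parent) {
            shiftDown(arr, parent, arr.length);
        }
        for (int end = arr.length - 1; end > 0; --end) {
            int tmp = arr[0]; arr[0] = arr[end]; arr[end] = tmp;
            shiftDown(arr, 0, end); // re-settle the new top within [0, end)
        }
    }

    static void shiftDown(int[] arr, int parent, int sz) {
        int child = 2 * parent + 1;
        while (child < sz) {
            // Pick the larger of the two children, then swap if it beats the parent
            if (child + 1 < sz && arr[child] < arr[child + 1]) ++child;
            if (arr[child] > arr[parent]) {
                int tmp = arr[child]; arr[child] = arr[parent]; arr[parent] = tmp;
                parent = child;
                child = 2 * parent + 1;
            } else {
                break;
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = {50, 10, 90, 30, 70, 40, 80, 60, 20};
        heapSort(arr);
        System.out.println(Arrays.toString(arr)); // [10, 20, 30, 40, 50, 60, 70, 80, 90]
    }
}
```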

### 3.5.3 Heap Sorting Complexity Analysis

```java
/*
 * Nodes per level:            2^0, 2^1, 2^2, ..., 2^(h-2)  (second-to-last level)
 * Sift-down height per level: h-1, h-2, h-3, ..., 1
 *
 * Cost of building the heap = sum over levels of (node count * height):
 * T(N)   = 2^0*(h-1) + 2^1*(h-2) + 2^2*(h-3) + ... + 2^(h-3)*2 + 2^(h-2)*1
 * 2*T(N) = 2^1*(h-1) + 2^2*(h-2) + 2^3*(h-3) + ... + 2^(h-2)*2 + 2^(h-1)*1
 *
 * Subtracting the first line from the second:
 * T(N) = 2^1 + 2^2 + ... + 2^(h-2) + 2^(h-1) + 1 - h   (geometric sum: 2^h - 2)
 *      = 2^h - h - 1
 *
 * With h = logN + 1 and total node count 2^h - 1 = N, this gives T(N) = O(N)
 */
```

The formula above shows why building the heap costs only O(N): T(N) = 2^h - h - 1 < 2^h - 1 = N.

Its running time is mainly spent building and rebuilding the heap. When building, we start from the lowest, rightmost non-terminal node of the complete binary tree and sift each non-terminal node down, comparing it with its children and swapping where needed; summed over all nodes, the cost of building the whole heap is O(N).

During the actual sorting pass, each removal of the heap top requires a rebuild of O(logN) (the height of the heap), repeated n-1 times, so the rebuild phase costs O(N logN).

So the time complexity of heap sort is O(N logN). Since it is insensitive to the initial order of the records, O(N logN) is its best, worst, and average case, which clearly outperforms bubble, simple selection, and direct insertion.

## 3.6. Merge Sort

Merge sort is an effective sorting algorithm based on merge operation. This algorithm is a very typical application of Divide and Conquer.

As a typical algorithmic application of the divide-and-conquer idea, merge sort is implemented in two ways:

- Top-down recursion (all recursive methods can be overridden with iteration, so there is a second method);
- Bottom-up iteration;

To make the idea concrete: {16,7,13,10,9,15,3,2,5,8,12,1,11,4,6,14} becomes one ordered array by repeatedly merging sorted pairs. Note that the shape resembles an inverted complete binary tree; sorting algorithms built on a complete binary tree structure are generally quite efficient.

### 3.6.1 Recursive implementation of merge sort

```java
/*
 * Time complexity:
 *   Worst:   O(N logN)
 *   Best:    O(N logN)
 *   Average: O(N logN)
 * Space complexity: O(N)
 * Stability: stable
 * Data object: array, linked list
 * Split the data into two runs and repeatedly move the smaller head element
 * into a new segment; works top-down (recursion) or bottom-up (iteration).
 * Note: quicksort's worst case is O(N^2), e.g. on an already-sorted sequence,
 * but its expected time is O(N logN) with a very small hidden constant
 * (roughly 1.3 to 1.5), smaller than that of merge sort, whose complexity is
 * a stable O(N logN). So for most weakly-ordered random sequences, quicksort
 * tends to beat merge sort.
 */
class MergeSort {
    void mergeSort(int[] arr) {
        long start = System.currentTimeMillis();
        _mergeSort(arr, 0, arr.length);
        long end = System.currentTimeMillis();
        System.out.println("mergeSort time:" + (end - start));
    }

    // Recursive helper: sorts the half-open interval [left, right)
    private void _mergeSort(int[] arr, int left, int right) {
        if (right - left <= 1) {
            // Zero or one element: nothing to sort
            return;
        }
        int mid = (left + right) >> 1;
        /*
         * Make [left, mid) ordered first, then [mid, right),
         * then merge the two ordered intervals
         * (the shape of a binary-tree postorder traversal)
         */
        _mergeSort(arr, left, mid);
        _mergeSort(arr, mid, right);
        merge(arr, left, mid, right);
    }

    /*
     * The core operation of merge sort: merge two ordered runs, described by
     * [left, mid)  - the left array
     * [mid, right) - the right array
     */
    private void merge(int[] arr, int left, int mid, int right) {
        if (left >= right) { // empty interval
            return;
        }
        // Temporary space saves the merged result: right - left elements
        int[] tmp = new int[right - left];
        int tmpIndex = 0; // where the next element goes in temporary space
        int cur1 = left;
        int cur2 = mid;
        while (cur1 < mid && cur2 < right) { // both runs still non-empty
            if (arr[cur1] <= arr[cur2]) { // <= keeps the sort stable
                tmp[tmpIndex++] = arr[cur1++];
            } else {
                tmp[tmpIndex++] = arr[cur2++];
            }
        }
        // After the loop, copy whatever remains of either run
        while (cur1 < mid) {
            tmp[tmpIndex++] = arr[cur1++];
        }
        while (cur2 < right) {
            tmp[tmpIndex++] = arr[cur2++];
        }
        // Put the tmp result back into arr[left, right) of the original array
        for (int i = 0; i < tmp.length; i++) {
            arr[left + i] = tmp[i];
        }
    }

    /*
     * The recursion gradually slices the array; the non-recursive version just
     * adjusts subscripts [faster].
     * First treat each element as a run of length 1:
     *   [0],[1]  [2],[3]  [4],[5] ... are merged pairwise
     * Then runs of length 2:
     *   [0,1],[2,3]  [4,5],[6,7]  [8,9],[10,11] ...
     * Then length 4: [0..3],[4..7]  [8..11],[12..15] ... and so on.
     */
    void mergeSortByLoop(int[] arr) {
        long start = System.currentTimeMillis();
        // gap limits the length of each run being merged
        for (int gap = 1; gap < arr.length; gap *= 2) {
            for (int i = 0; i < arr.length; i += 2 * gap) {
                // Merge the adjacent runs [left, mid) and [mid, right),
                // clamping both boundaries to arr.length
                int left = i;
                int mid = Math.min(i + gap, arr.length);
                int right = Math.min(i + 2 * gap, arr.length);
                merge(arr, left, mid, right);
            }
        }
        long end = System.currentTimeMillis();
        System.out.println("mergeSortByLoop time:" + (end - start));
    }
}
```

Step-by-step walkthrough of the code:

```java
void mergeSort(int[] arr) {
    long before = System.currentTimeMillis();
    mergeSortInternal(arr, 0, arr.length);
    long after = System.currentTimeMillis();
    System.out.println("MergeSort time: " + (after - before));
}
```

The entry point passes in the left-closed, right-open interval [0, arr.length).

```java
private void mergeSortInternal(int[] arr, int left, int right) {
    if (right - left <= 1) {
        return;
    } else {
        int mid = (left + right) >> 1;
        mergeSortInternal(arr, left, mid);
        mergeSortInternal(arr, mid, right);
        merge(arr, left, mid, right);
    }
}
```

This divides the interval and recurses on the left and right halves.

Finally, merge is called to combine the two ordered intervals [left, mid) and [mid, right).

```java
private void merge(int[] arr, int left, int mid, int right) {
    int[] tmp = new int[right - left]; // right - left elements are being merged
    int tmpIndex = 0;
    int cur1 = left, cur2 = mid;
    while (cur1 < mid && cur2 < right) {
        if (arr[cur1] <= arr[cur2]) { // <= keeps the sort stable
            tmp[tmpIndex++] = arr[cur1++];
        } else {
            tmp[tmpIndex++] = arr[cur2++];
        }
    }
    // Copy over whatever remains of either run
    while (cur1 < mid) {
        tmp[tmpIndex++] = arr[cur1++];
    }
    while (cur2 < right) {
        tmp[tmpIndex++] = arr[cur2++];
    }
    // The data goes back to its original positions: arr[left + i], not arr[i]
    for (int i = 0; i < tmpIndex; i++) {
        arr[left + i] = tmp[i];
    }
}
```

This implements the actual interval merge.

Given the data {50, 10, 90, 30, 70, 40, 80, 60, 20}, how does the recursive code execute?

The initial call passes left value 0 and right value 9, i.e. the half-open interval [0, 9).

It is then split into the left interval [0, 4) and the right interval [4, 9).

[0,2), [2, 4), [4,6), [6, 9)

The recursion terminates when right - left <= 1, i.e. when an interval holds at most one element; from there, the merge function combines the intervals back up into larger ordered runs.
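The divide-and-merge order above can be made visible with a tiny trace harness (a sketch; the class name `MergeTrace` and the printed format are my own illustration, not part of the original code):

```java
import java.util.Arrays;

public class MergeTrace {
    // Recursive merge sort over [left, right), printing each merge as it happens
    static void traceSort(int[] arr, int left, int right) {
        if (right - left <= 1) return;
        int mid = (left + right) >> 1;
        traceSort(arr, left, mid);
        traceSort(arr, mid, right);
        merge(arr, left, mid, right);
        System.out.println("merged [" + left + "," + right + ") -> "
                + Arrays.toString(Arrays.copyOfRange(arr, left, right)));
    }

    // Standard stable merge of [left, mid) and [mid, right)
    static void merge(int[] arr, int left, int mid, int right) {
        int[] tmp = new int[right - left];
        int k = 0, c1 = left, c2 = mid;
        while (c1 < mid && c2 < right)
            tmp[k++] = (arr[c1] <= arr[c2]) ? arr[c1++] : arr[c2++];
        while (c1 < mid) tmp[k++] = arr[c1++];
        while (c2 < right) tmp[k++] = arr[c2++];
        System.arraycopy(tmp, 0, arr, left, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = {50, 10, 90, 30, 70, 40, 80, 60, 20};
        traceSort(data, 0, data.length);
        System.out.println(Arrays.toString(data));
    }
}
```

Running this shows the merges firing in postorder: single elements first, then runs of two, four, and finally the whole array.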

### 3.5. Non-recursive Implementation of Merge Sort

Non-recursive code

```java
void mergeSortTraversalNo(int[] arr) {
    long before = System.currentTimeMillis();
    // gap: the length of each run to be merged
    int gap = 1;
    for (; gap < arr.length; gap *= 2) {
        // Merge each pair of adjacent runs of length gap
        for (int i = 0; i < arr.length; i += 2 * gap) {
            int left = i;
            int mid = i + gap > arr.length ? arr.length : i + gap;
            int right = i + 2 * gap > arr.length ? arr.length : i + 2 * gap;
            merge(arr, left, mid, right);
        }
    }
    long after = System.currentTimeMillis();
    System.out.println("MergeSortTraversalNo time: " + (after - before));
}
```

Just remember the variables gap and i: the left boundary is i, the middle boundary is i + gap, and the right boundary is i + 2 * gap, where mid and right are both clamped so they never run past the end of the array.
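To get a concrete feel for how those boundaries are clamped, here is a small sketch (array length 10 is my own example, matching the comment in the code above) that just prints the merge intervals for each pass:

```java
public class GapBounds {
    public static void main(String[] args) {
        int len = 10; // hypothetical array length
        for (int gap = 1; gap < len; gap *= 2) {
            for (int i = 0; i < len; i += 2 * gap) {
                int left = i;
                int mid = Math.min(i + gap, len);    // clamp to array end
                int right = Math.min(i + 2 * gap, len);
                System.out.println("gap=" + gap + ": merge ["
                        + left + "," + mid + ") + [" + mid + "," + right + ")");
            }
        }
    }
}
```

The last pass (gap = 8) produces the single merge [0,8) + [8,10), just as the comment in the code describes.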

### 3.6. Merge Sort Complexity Analysis

Let's analyze the time complexity of merge sort. One merge pass combines pairs of adjacent ordered runs of length h across arr[1]~arr[n], putting the results into tmp[1]~tmp[n]; this scans every record in the sequence once, so it costs O(n). As the depth of a complete binary tree suggests, the whole sort needs ⌈log₂n⌉ such passes, so the total time complexity is O(n log n), and this is the best, worst, and average time performance of merge sort.
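The passes-times-cost argument above is the standard divide-and-conquer recurrence (with c a constant per-element merge cost):

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \cdots
     = 2^k\,T\!\left(n/2^k\right) + k\,cn
\quad\xrightarrow{\;k=\log_2 n\;}\quad
n\,T(1) + cn\log_2 n = O(n\log n)
```

Each level of the recursion contributes cn work in total, and there are log₂n levels.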

Since merge sort needs as much extra storage as the original record sequence to hold the merge results, plus stack space of depth log₂n during recursion, the space complexity is O(n + log n).

In addition, a careful look at the code shows the if (arr[cur1] <= arr[cur2]) statement in the merge function: when keys are equal, the element from the left run is always taken first, and no long-distance swaps ever occur, so merge sort is a stable sorting algorithm.
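Stability only becomes observable when records carry more than the sort key. A minimal sketch (the `{key, originalIndex}` records are my own illustration, and for brevity it uses the JDK's `Arrays.sort` on objects, which is itself a stable merge-sort variant) checks that equal keys keep their original order:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StabilityDemo {
    public static void main(String[] args) {
        // Records: {key, originalIndex}; two records share key 5
        int[][] records = {{5, 0}, {3, 1}, {5, 2}, {1, 3}};
        // Arrays.sort on object arrays is a stable merge-sort variant (TimSort)
        Arrays.sort(records, Comparator.comparingInt((int[] r) -> r[0]));
        for (int[] r : records) {
            System.out.println(r[0] + " (orig " + r[1] + ")");
        }
        // The two key-5 records still appear as orig 0 before orig 2
    }
}
```

An unstable sort (selection sort, quick sort) gives no such guarantee for the two key-5 records.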

In short, merge sort is a memory-hungry but efficient and stable algorithm.

## 3.7. Quick Sort

Finally, the star of the show. If you are ever asked to write a sorting algorithm and quick sort is not in your toolbox, quietly go practice it now; at the very least you will avoid being ridiculed.

Shell sort is an upgrade of direct insertion sort; both belong to the insertion sort class.

Heap sort is an upgrade of simple selection sort; both belong to the selection sort class.

Quick sort is an upgrade of the slowest of them all, bubble sort; both belong to the exchange sort class.

Quick sort is likewise built on repeated comparison and exchange, but it increases the distance a record travels per comparison: records with large keys move directly from the front toward the back, and records with small keys move directly from the back toward the front, which reduces the total number of comparisons and moves.

### 3.7.1 Basic Ideas of Quick Sort Algorithm

A single partitioning pass splits the records into two independent parts, where every key in one part is smaller than every key in the other. Each part is then sorted the same way, until the entire sequence is ordered.
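One such partitioning pass can be watched in isolation. This is a minimal sketch (the class name `PartitionDemo` is my own) using a fill-hole partition around the leftmost element:

```java
import java.util.Arrays;

public class PartitionDemo {
    // Fill-hole partition around arr[left] over the closed interval [left, right]
    static int partition(int[] arr, int left, int right) {
        int tmp = arr[left];
        while (left < right) {
            while (left < right && arr[right] >= tmp) --right;
            arr[left] = arr[right];   // smaller element found on the right moves left
            while (left < right && arr[left] <= tmp) ++left;
            arr[right] = arr[left];   // larger element found on the left moves right
        }
        arr[left] = tmp;              // pivot drops into its final slot
        return left;
    }

    public static void main(String[] args) {
        int[] data = {50, 10, 90, 30, 70, 40, 80, 60, 20};
        int p = partition(data, 0, data.length - 1);
        System.out.println("pivot index " + p + ": " + Arrays.toString(data));
    }
}
```

On this data the pivot 50 ends up at index 4, with every key to its left smaller and every key to its right larger; the two halves are then sorted independently.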

### 3.7.2 Fast Sorting Recursive Algorithm Implementation and Optimization Steps

```java
/*
 * Time complexity:
 *   Worst:   O(N^2), e.g. on an already ordered sequence
 *   Best:    O(N log N); with N records forming a full binary tree, each level
 *            of recursion touches N elements in total and the tree height is
 *            ceil(log2(N + 1))
 *   Average: O(N log N)
 * Space complexity:
 *   Best:  O(log N), the recursion depth of a balanced split (frames for the
 *          left subtree are released before the right subtree is visited)
 *   Worst: O(N), when every split is maximally lopsided
 * Stability: unstable
 * Data object: array
 *
 * [smaller elements | pivot | larger elements]
 * Pick an element of the interval as the pivot, move elements smaller than
 * the pivot in front of it and larger ones behind it, then sort the two
 * sub-intervals the same way.
 */
class QuickSort {
    void quickSort(int[] arr) {
        long start = System.currentTimeMillis();
        quick(arr, 0, arr.length - 1);
        long after = System.currentTimeMillis();
        System.out.println("quickSort time:" + (after - start));
    }

    private void quick(int[] arr, int left, int right) {
        if (left >= right) {
            return;
        } else {
            int pivot = partition(arr, left, right);
            quick(arr, left, pivot - 1);
            quick(arr, pivot + 1, right);
        }
    }

    private int partition(int[] arr, int left, int right) {
        int tmp = arr[left];
        while (left < right) {
            while (left < right && arr[right] >= tmp) {
                --right;
            }
            arr[left] = arr[right];
            while (left < right && arr[left] <= tmp) {
                ++left;
            }
            arr[right] = arr[left];
        }
        arr[left] = tmp;
        return left;
    }
}
```

Code analysis:

- Start with quick(arr, 0, arr.length-1)

This passes in the closed interval [0, arr.length - 1] (note how the parameters differ from merge sort's half-open interval). Then execute:

```java
private void quick(int[] arr, int left, int right) {
    if (left >= right) {
        return;
    } else {
        int pivot = partition(arr, left, right);
        quick(arr, left, pivot - 1);
        quick(arr, pivot + 1, right);
    }
}
```

This recursively divides the array around the pivot index returned by partition.

- Then partition executes

```java
private int partition(int[] arr, int left, int right) {
    int tmp = arr[left];
    while (left < right) {
        while (left < right && arr[right] >= tmp) {
            --right;
        }
        // A smaller element found on the right moves straight to the left
        arr[left] = arr[right];
        while (left < right && arr[left] <= tmp) {
            ++left;
        }
        // A larger element found on the left moves straight to the right
        arr[right] = arr[left];
    }
    // Finally drop the pivot into the remaining hole and return its index
    arr[left] = tmp;
    return left;
}
```

The pivot value is saved, elements are moved inward from both ends, and the pivot's final index is returned.

There is still a lot that can be improved in the quick sort above.

Optimize Pivot Selection

If the selected pivot lands near the middle of the value range, the sequence splits into a "smaller" half and a "larger" half of similar size. With bad luck, however, a minimum or maximum value gets chosen as the pivot, and that kind of division makes the sort inefficient.

Some suggest picking a random index between left and right. While this does remove the performance bottleneck on ordered sequences in expectation, randomness is still a gamble: nothing stops it from picking an extreme value anyway.

Below is the version with random pivot selection added:

```java
private void quick(int[] arr, int left, int right) {
    if (left >= right) {
        return;
    } else {
        // Swap a randomly chosen element into the pivot position
        Random random = new Random();
        int rand = random.nextInt(right - left) + left + 1;
        int tmp = arr[left];
        arr[left] = arr[rand];
        arr[rand] = tmp;
        int pivot = partition(arr, left, right);
        quick(arr, left, pivot - 1);
        quick(arr, pivot + 1, right);
    }
}
```

A further improvement is median-of-three pivot selection:

```java
private void quick(int[] arr, int left, int right) {
    if (left >= right) {
        return;
    } else {
        medianOfThree(arr, left, right);
        int pivot = partition(arr, left, right);
        quick(arr, left, pivot - 1);
        quick(arr, pivot + 1, right);
    }
}

// Rearrange so that arr[mid] <= arr[left] <= arr[right]:
// the median of the three ends up at left, ready to serve as the pivot
private void medianOfThree(int[] arr, int left, int right) {
    int mid = (left + right) >> 1, tmp = 0;
    if (arr[mid] > arr[left]) {
        tmp = arr[mid]; arr[mid] = arr[left]; arr[left] = tmp;
    }
    if (arr[mid] > arr[right]) {
        tmp = arr[mid]; arr[mid] = arr[right]; arr[right] = tmp;
    }
    if (arr[left] > arr[right]) {
        tmp = arr[left]; arr[left] = arr[right]; arr[right] = tmp;
    }
}
```

Three keys are sampled and the middle one becomes the pivot; usually the left, middle, and right elements are taken. At worst this middle value cannot be both the minimum and the maximum, and the chance that all three samples are extreme values is tiny, so the odds of getting a pivot near the middle improve greatly.

Since the whole sequence is unordered anyway, sampling the left, middle, and right elements is as good as sampling three random positions, and a random number generator has its own time overhead, so random generation is not used here.
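A quick check of the effect (a sketch; the class name and data are my own, the swap logic is the same as `medianOfThree` above): on an already sorted range, the worst case for a fixed-left pivot, the median rather than the minimum ends up at `left`:

```java
import java.util.Arrays;

public class MedianOfThreeDemo {
    // Arrange arr[mid] <= arr[left] <= arr[right], so arr[left] holds the median
    static void medianOfThree(int[] arr, int left, int right) {
        int mid = (left + right) >> 1, tmp;
        if (arr[mid] > arr[left])   { tmp = arr[mid];  arr[mid]  = arr[left];  arr[left]  = tmp; }
        if (arr[mid] > arr[right])  { tmp = arr[mid];  arr[mid]  = arr[right]; arr[right] = tmp; }
        if (arr[left] > arr[right]) { tmp = arr[left]; arr[left] = arr[right]; arr[right] = tmp; }
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5, 6, 7}; // sorted input: worst case for a fixed-left pivot
        medianOfThree(sorted, 0, sorted.length - 1);
        System.out.println(Arrays.toString(sorted)); // arr[0] is now 4, the median of 1, 4, 7
    }
}
```

With 4 as the pivot instead of 1, the sorted input now splits roughly in half instead of degenerating to O(N^2).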

Optimize Recursive Operations

```java
private void quick(int[] arr, int left, int right) {
    if (left >= right) {
        return;
    } else {
        // Recurse only on the left half and loop over the right half,
        // eliminating one of the two recursive calls
        while (left < right) {
            medianOfThree(arr, left, right);
            int pivot = partition(arr, left, right);
            quick(arr, left, pivot - 1);
            left = pivot + 1;
        }
    }
}
```

Optimizing the sorting scheme for small arrays

Direct insertion sort performs best among the simple sorts when the array is very small, whereas quick sort pays for its recursion. That overhead is negligible against the algorithm's overall advantage when sorting large amounts of data, but running full quick sort on an array of just a few records is a sledgehammer cracking a nut.

```java
private void quick(int[] arr, int left, int right) {
    if (left >= right) {
        return;
    } else {
        // For small enough intervals, fall back to a simple sort; Shell sort is
        // used here, and plain insertion sort works just as well for tiny inputs.
        // Note the sort is restricted to the [left, right] interval.
        if (right - left <= 150) {
            int len = right - left + 1;
            for (int gap = len >> 1; gap > 0; gap >>= 1) {
                for (int i = left + gap; i <= right; i++) {
                    int tmp = arr[i];
                    int j = i - gap;
                    for (; j >= left && arr[j] > tmp; j -= gap) {
                        arr[j + gap] = arr[j];
                    }
                    arr[j + gap] = tmp;
                }
            }
        } else {
            /* 3. Median of three: arr[mid] <= arr[left] <= arr[right] */
            // medianOfThree(arr, left, right);
            /* 2. Random selection: optimize by luck */
            // Random random = new Random();
            // int rand = random.nextInt(right - left) + left + 1;
            // int tmp = arr[left];
            // arr[left] = arr[rand];
            // arr[rand] = tmp;
            /* 1. Fixed pivot selection */
            int pivot = partition(arr, left, right);
            quick(arr, left, pivot - 1);
            quick(arr, pivot + 1, right);
        }
    }
}
```

### 3.7.4 Quick Sort Non-recursive Implementation

```java
// Non-recursive quick sort: simulate the call stack explicitly
void quickSortTraversalNo(int[] arr) {
    long before = System.currentTimeMillis();
    Stack<Integer> stack = new Stack<>();
    stack.push(0);
    stack.push(arr.length - 1);
    while (!stack.empty()) {
        // Note the access order: right was pushed last, so it pops first
        int right = stack.pop();
        int left = stack.pop();
        if (left >= right) {
            continue;
        } else {
            int pivot = partition(arr, left, right);
            stack.push(left);
            stack.push(pivot - 1);
            stack.push(pivot + 1);
            stack.push(right);
        }
    }
    long after = System.currentTimeMillis();
    System.out.println("QuickSortTraversalNo time: " + (after - before));
}
```

| Sorting algorithm | Average time complexity | Best case | Worst case | Space complexity | Stability |
|---|---|---|---|---|---|
| Bubble sort | O(N^2) | O(N) | O(N^2) | O(1) | Stable |
| Selection sort | O(N^2) | O(N^2) | O(N^2) | O(1) | Unstable |
| Insertion sort | O(N^2) | O(N) | O(N^2) | O(1) | Stable |
| Shell sort | O(N^1.3) | O(N) | O(N^2) | O(1) | Unstable |
| Heap sort | O(NlogN) | O(NlogN) | O(NlogN) | O(1) | Unstable |
| Merge sort | O(NlogN) | O(NlogN) | O(NlogN) | O(N) | Stable |
| Quick sort | O(NlogN) | O(NlogN) | O(N^2) | O(logN) | Unstable |