Review and consolidation of sorting algorithms

Major sorting algorithms

A brief review of various sorting algorithms

Bubble sort

Idea: repeatedly traverse the sequence to be sorted, compare each pair of adjacent elements, and swap them if they are in the wrong order. Each pass moves the maximum value of the unsorted part to the boundary of the sorted part.

Stability: as long as equal adjacent elements are not swapped, bubble sort is stable

Applicable scenario: little code and a simple idea, suitable for sorting small amounts of data. However, the time complexity of the algorithm is relatively high, so it is not suitable for large data volumes.

Optimization: set a flag that records whether any swap happened in a pass; in the best case (already sorted) a single pass is enough.
Best-case time complexity: O(n)
Time complexity in other cases: O(n^2)

code:

#include <iostream>
#include <stdio.h>
using namespace std;

void bubbleSort(int arr[],int len){
    int flag,temp;
    for(int i=len-1;i>0;i--){
        flag=0;                      //no swap has happened in this pass yet
        for(int j=0;j<i;j++){
            if(arr[j]>arr[j+1]){     //adjacent pair out of order: swap
                temp=arr[j];
                arr[j]=arr[j+1];
                arr[j+1]=temp;
                flag=1;
            }
        }
        if(flag==0) break;           //no swap in a whole pass: already sorted
    }
}

Review:
Ways to compute the length of an int array (valid only where the array itself is in scope, not a pointer it has decayed to):
len = (int)(sizeof(array)/sizeof(*array))
len = (int)(sizeof(array)/sizeof(int))
len = (int)(sizeof(array)/sizeof(array[0]))

Application note: a few bubble passes can be used to find the several largest values of a sequence, because each pass floats one maximum value to one end, as sketched below.
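As a small sketch of both points above (the sizeof length trick and using a few bubble passes to surface the k largest values). It assumes the array is defined in the same scope, so sizeof sees the whole array rather than a decayed pointer; the helper name bubbleTopK is purely illustrative, not from the original text.

#include <iostream>
using namespace std;

//k bubble passes: after pass i, the i-th largest value is in its final place at the tail
void bubbleTopK(int arr[], int len, int k){
    for(int i=len-1; i>len-1-k && i>0; i--){
        for(int j=0; j<i; j++){
            if(arr[j]>arr[j+1]){
                int temp=arr[j];
                arr[j]=arr[j+1];
                arr[j+1]=temp;
            }
        }
    }
}

int main(){
    int a[]={5,1,9,3,7,2};
    int len=(int)(sizeof(a)/sizeof(a[0])); //works here because a is a real array, not a pointer parameter
    bubbleTopK(a,len,2);                   //now a[len-1] and a[len-2] hold the two largest values
    cout<<a[len-1]<<" "<<a[len-2]<<endl;   //prints 9 7
    return 0;
}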

Selection sort

Idea: it is also an exchange-based sort, somewhat similar to bubble sort. Each pass finds the extreme value of the unsorted part and places it at the boundary of the sorted part, until the sort is complete.

Stability: usually implemented on arrays with swapping, which makes it unstable.
Applicable scenario: small amounts of data
Time complexity: O(n^2)

#include <iostream>
#include <stdio.h>
using namespace std;
//Put the small one in the front
void selectSort(int arr[],int len){
    int temp,mark,i,j;
    for(i=0;i<len;i++){
            mark=i;
        for(j=i+1;j<len;j++){
            if(arr[mark]>arr[j]) mark=j;
        }

        if(mark!=i){
            temp=arr[i];
            arr[i]=arr[mark];
            arr[mark]=temp;
        }
    }
}

//Alternative version (same name as above, use one or the other): put the big one at the back each pass
void selectSort(int arr[],int len){
    int temp,mark,i,j;
    for(i=len-1;i>=0;i--){
            mark=i;
        for(j=i-1;j>=0;j--){
            if(arr[mark]<arr[j]) mark=j;
        }

        if(mark!=i){
            temp=arr[i];
            arr[i]=arr[mark];
            arr[mark]=temp;
        }
    }
}

Note: once you understand the essence, many variations are possible. Just be careful with the details!

Insertion sort

Idea: for each element of the unsorted part, traverse the sorted part, find the appropriate position, and insert it there.

Description: the sequence is divided into a sorted part and an unsorted part. By default the first element counts as sorted. Starting from the second element, find the appropriate insertion position within the sorted part, and repeat until the sort is complete.

Stability: while scanning the sorted part from back to front, stop as soon as an element not greater than the current value is found, so equal elements keep their original order; insertion sort is therefore stable.

Time complexity: O(n^2)
Applicable scenario: not suitable for large volume data

#include <iostream>
#include <stdio.h>
using namespace std;

//Default head end queued
void insertSort(int arr[],int len){
    int position,value;
    for(int i=1;i<len;i++){
        //By default, the first element is arranged, and the search insertion starts from the second element
        position=i;
        value=arr[position];
        //Record the information of the current element to be inserted
        while(position>0&&arr[position-1]>value) //From small to large row
        {
            arr[position]=arr[position-1];//Big move back
            position--;
        }
        //Finally, the position to be inserted is determined
        arr[position]=value;
    }
}

//The default is that the tail is sorted
void insertSort(int arr[],int len){
    int position,value;
    for(int i=len-2;i>=0;i--){
        //By default, the last element is sorted; the search for an insertion point starts from the penultimate element
        position=i;
        value=arr[position];
        //Record the current element to be inserted
        while(position<len-1&&arr[position+1]<value) //Sort from small to large
        {
            arr[position]=arr[position+1];//Smaller elements move forward
            position++;
        }
        //Finally, the insertion position is determined
        arr[position]=value;
    }
}

Note: once the idea is understood, the only thing to keep straight is the traversal direction of the sorted and unsorted parts!

Merge sort

Idea: use the merge operation. First sort the subsequences, then merge the sorted subsequences until the whole sequence is ordered.
The core step is sorting and merging two ordered sublists.
Merging two ordered lists into one is called 2-way merging.

Time complexity: O(nlogn)
Space complexity: O(n)
Stability: stable
Applicable scenario: because of its low time complexity it works well for large amounts of data, but not for extremely large ones, since it needs O(n) additional space.

Two implementations:
1. Recursive method:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
void Merge(int arr[],int tempA[],int L,int R,int rightEnd){
    int leftEnd,num,tempStart;
    tempStart=L;
    leftEnd=R-1;
    num=rightEnd-L+1;
    while(L<=leftEnd&&R<=rightEnd){
        if(arr[L]<=arr[R]) tempA[tempStart++]=arr[L++];//When equal, put the left one first to ensure stability
        else tempA[tempStart++]=arr[R++];
    }
     while(L<=leftEnd){
       tempA[tempStart++]=arr[L++]; //Put the rest on the left
     }
     while(R<=rightEnd){
            tempA[tempStart++]=arr[R++];//Put the rest on the right
    }

    for(int i=0;i<num;i++,rightEnd--){
        arr[rightEnd]=tempA[rightEnd];
    }
}

void Msort(int arr[],int tempA[],int L,int rightEnd){
    int center;
    if(L<rightEnd){
        center=(rightEnd+L)/2;
        Msort(arr,tempA,L,center);
        Msort(arr,tempA,center+1,rightEnd);
        Merge(arr,tempA,L,center+1,rightEnd);
    }

}
void mergeSort(int arr[],int len){//Unified function interface
    int  *tempA;
    tempA=(int*)malloc(len*sizeof(int));//Many intermediate operations are omitted
    if(tempA!=NULL){
        Msort(arr,tempA,0,len-1);
        free(tempA);
    }
    else cout<<"Insufficient space";
}

Reference: the Data Structures course from Zhejiang University on MOOC; the lectures are excellent.
Note the simplification technique of allocating the temporary array once in the top-level interface instead of inside every merge!

2. Non recursive implementation:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

void Merge(int arr[],int tempA[],int L,int R,int rightEnd){
    int leftEnd,num,tempStart;
    tempStart=L;
    leftEnd=R-1;
    num=rightEnd-L+1;
    while(L<=leftEnd&&R<=rightEnd){
        if(arr[L]<=arr[R]) tempA[tempStart++]=arr[L++];//When equal, put the left one first to ensure stability
        else tempA[tempStart++]=arr[R++];
    }
     while(L<=leftEnd){
       tempA[tempStart++]=arr[L++]; //Put the rest on the left
     }
     while(R<=rightEnd){
            tempA[tempStart++]=arr[R++];//Put the rest on the right
    }

    for(int i=0;i<num;i++,rightEnd--){
        arr[rightEnd]=tempA[rightEnd];
    }
}

void Merge_pass(int arr[],int tempA[],int len,int length){

    int i,j;
    for(i=0;i<=len-2*length;i+=2*length)
        Merge(arr,tempA,i,i+length,i+2*length-1);
    if(i+length<len) Merge(arr,tempA,i,i+length,len-1);
    else
        for(j=i;j<len;j++) tempA[j]=arr[j];
}

void Merge_Sort(int arr[],int len){
    int length=1;
    int* tempA;
    tempA=(int*)malloc(len*sizeof(int));
    if(tempA!=NULL){
        while(length<len){
            Merge_pass(arr,tempA,len,length);
            length*=2;
            Merge_pass(tempA,arr,len,length);
            length*=2;

        }
        free(tempA);
    }
    else cout<<"Insufficient space"<<endl;
}

The core of both implementations is the merge operation.
The difference is that one applies the merge operation recursively and the other iteratively.
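A minimal usage sketch, assuming it is compiled together with one of the implementations above (here the recursive mergeSort; the non-recursive Merge_Sort is called the same way); the sample values are arbitrary.

//Example driver: sorts a small array with the recursive interface above
int main(){
    int a[]={38,27,43,3,9,82,10};
    int len=(int)(sizeof(a)/sizeof(a[0]));
    mergeSort(a,len);                 //or Merge_Sort(a,len) for the non-recursive version
    for(int i=0;i<len;i++)
        cout<<a[i]<<(i==len-1?"\n":" ");
    return 0;
}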

Quick sort

Idea: use the divide-and-conquer strategy. Pick a pivot element from the sequence to be sorted; elements larger than the pivot go to its right and smaller ones to its left, then apply quick sort recursively to the left and right parts.

The choice of pivot is very important and affects the complexity of the algorithm.
If the first element is simply chosen as the pivot, the time complexity degrades to O(n^2) when the sequence is already ordered.

A common remedy is to take the median of several elements, for example the median of the first, middle, and last elements (median-of-three):

Subset partitioning

When an element equals the pivot, stop and swap anyway. Although this causes some seemingly useless swaps, it tends to leave the pivot near the middle of the partition, keeping the complexity at O(nlogn). If equal elements are skipped instead of swapped, the pivot can end up at one end and the O(n^2) case above reappears.

Because quick sort is recursive, the recursion adds extra overhead, such as pushing and popping stack frames.

code:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

void Swap( int *a, int *b )
{
     int t = *a;
     *a = *b;
     *b = t;
}

void insertSort(int arr[],int len){
    int position,value;
    for(int i=1;i<len;i++){
        //By default, the first element is arranged, and the search insertion starts from the second element
        position=i;
        value=arr[position];
        //Record the information of the current element to be inserted
        while(position>0&&arr[position-1]>value) //From small to large row
        {
            arr[position]=arr[position-1];//Big move back
            position--;
        }
        //Finally, the position to be inserted is determined
        arr[position]=value;
    }
}

int Median3( int A[], int Left, int Right )
{
    int Center = (Left+Right) / 2;
    if ( A[Left] > A[Center] )
        Swap( &A[Left], &A[Center] );
    if ( A[Left] > A[Right] )
        Swap( &A[Left], &A[Right] );
    if ( A[Center] > A[Right] )
        Swap( &A[Center], &A[Right] );
    /* Now A[Left] <= A[Center] <= A[Right] */
    Swap( &A[Center], &A[Right-1] ); /* Hide the pivot at position Right-1 */
    /* Only A[Left+1] ... A[Right-2] still needs to be considered */
    return  A[Right-1];  /* Return the pivot */
}

void Qsort( int A[], int Left, int Right )
{ /* Core recursive function */
     int Pivot, Cutoff=100, Low, High;

     if ( Cutoff <= Right-Left ) { /* If there are enough elements, use quicksort */
          Pivot = Median3( A, Left, Right ); /* Select the pivot */
          Low = Left; High = Right-1;
          while (1) { /* Move elements smaller than the pivot to its left and larger ones to its right */
               while ( A[++Low] < Pivot ) ;
               while ( A[--High] > Pivot ) ;
               if ( Low < High ) Swap( &A[Low], &A[High] );
               else break;
          }
          Swap( &A[Low], &A[Right-1] );   /* Move the pivot to its correct position */
          Qsort( A, Left, Low-1 );    /* Recursively solve the left part */
          Qsort( A, Low+1, Right );   /* Recursively solve the right part */
     }
     else insertSort( A+Left, Right-Left+1 ); /* Too few elements: use simple insertion sort */
}

void QuickSort( int A[], int N )
{ /* Unified interface */
     Qsort( A, 0, N-1 );
}

Calling the library quick sort function directly

/* Quick sort - call library functions directly */

#include <stdlib.h>

/*---------------Simple integer sorting--------------------*/
int compare(const void *a, const void *b)
{ /* Compares two integers. Non descending */
    return (*(int*)a - *(int*)b);
}
/* Call interface */ 
qsort(A, N, sizeof(int), compare);
/*---------------Simple integer sorting--------------------*/


/*--------------- In general, a key value in the structure Node is sorted---------------*/
struct Node {
    int key1, key2;
} A[MAXN];
 
int compare2keys(const void *a, const void *b)
{ /* Compare the two key values: arrange them in non ascending order by key1; if key1 is equal, arrange them in non descending order by key2 */
    int k;
    if ( ((const struct Node*)a)->key1 < ((const struct Node*)b)->key1 )
        k = 1;
    else if ( ((const struct Node*)a)->key1 > ((const struct Node*)b)->key1 )
        k = -1;
    else { /* If key1 is equal */
        if ( ((const struct Node*)a)->key2 < ((const struct Node*)b)->key2 )
            k = -1;
        else
            k = 1;
    }
    return k;
}
/* Call interface */ 
qsort(A, N, sizeof(struct Node), compare2keys);
/*--------------- In general, a key value in the structure Node is sorted---------------*/

Another approach is to use the first element as the pivot, although it is less efficient than the previous one:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

int pivotP(int arr[],int low,int high){
    int pivot=arr[low];
    while(low<high){
        while(low<high&&arr[high]>=pivot) high--;
        arr[low]=arr[high];
        while(low<high&&arr[low]<=pivot) low++;
        arr[high]=arr[low];
    }
    arr[low]=pivot;
    return low;
}
void qsort(int arr[],int low,int high){
    if(low>=high) return;
    int pivot=pivotP(arr,low,high);
    qsort(arr,low,pivot-1);
    qsort(arr,pivot+1,high);
}
void quickSort(int arr[],int len){
    qsort(arr,0,len-1);
}

Stability: unstable, which is closely related to the choice of pivot and how the partition moves elements
Applicable scenario: large data volumes

Average time complexity: O(nlogn), worst case O(n^2)

Heap sort

Idea: build a heap in which the root is the maximum value and every parent node is the maximum of its subtree. Once the maximum is at the root, swap it with the last element of the heap, re-adjust the heap so that the root is again the maximum of the remaining elements, swap it with the last unsorted element, and repeat until the sequence is sorted.

Description:
(taking the max-heap as an example)
1. First build a max-heap from the unordered data, so the top of the heap is the maximum value
Method: starting from the last parent node and moving toward the root, make sure the parent of each small sub-heap holds the maximum of that sub-heap.

2. Swap the heap top (the maximum) with the last element of the unsorted part, re-adjust the remaining unsorted elements into a max-heap, then swap the heap top with the last unsorted element again, repeating until the whole sequence is in order.

Time complexity: O(nlogn)

Applicable scenarios:
Heap sort incurs noticeable overhead while building and adjusting the heap, so it is not cost-effective when there are few elements. With many elements it is a good choice, and for problems such as "find the first n largest numbers" it is almost the preferred algorithm.
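For the "first n largest numbers" use case, one common sketch (an illustration only, separate from the heap sort code below) keeps a min-heap of size n using the standard std::priority_queue, so each of the N elements costs at most O(log n); the helper name topN is assumed for illustration.

#include <iostream>
#include <vector>
#include <queue>
#include <functional>
using namespace std;

//Return the n largest values of data (in ascending order)
vector<int> topN(const vector<int>& data, int n){
    priority_queue<int, vector<int>, greater<int>> minHeap; //smallest of the kept n sits on top
    for(int x : data){
        if((int)minHeap.size() < n) minHeap.push(x);
        else if(x > minHeap.top()){      //x beats the current smallest of the top n
            minHeap.pop();
            minHeap.push(x);
        }
    }
    vector<int> result;
    while(!minHeap.empty()){ result.push_back(minHeap.top()); minHeap.pop(); }
    return result;
}

int main(){
    vector<int> data={4,17,8,23,5,42,16};
    for(int v : topN(data,3)) cout<<v<<" ";  //prints 17 23 42
    return 0;
}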

code:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
void max_heapify(int arr[],int start,int tail){
    int dad=start;
    int son=2*dad+1;
    while(son<=tail){
        if(son+1<=tail&&arr[son+1]>arr[son]) son++; //Pick the larger of the two children
        if(arr[dad]>arr[son]) return;               //Parent already larger: heap property holds
        else{
            swap(arr[son],arr[dad]);                //Otherwise sift the parent down
            dad=son;
            son=2*dad+1;
        }
    }
}
void heap_sort(int arr[],int len){
    //Initialize and construct the original heap
    for(int i=len/2-1;i>=0;i--){
        max_heapify(arr,i,len-1);
    }
    //The top element of the heap exchanges positions with the last element that is not arranged
    for(int i=len-1;i>=0;i--){
        swap(arr[0],arr[i]);
        max_heapify(arr,0,i-1);
    }
}

On the whole, heap sort is quite efficient.

Shell Sort

Idea: it keeps the simplicity of insertion sort while overcoming its drawback of moving elements only one position at a time.

Time complexity: depends on the increment sequence
Worst case: can reach O(n^2)

Description:
1. First define an increment sequence D_M > D_(M-1) > ... > D_1 = 1
2. For each increment D_k, insertion-sort the elements that are D_k apart (a "D_k-sort")
Be careful:
a. Sorting with a small interval does not destroy the ordering produced by a larger interval: a sequence that is D_k-sorted stays D_k-sorted after a D_(k-1)-sort

b. If the increments are not coprime with each other, the smaller increments may do almost no useful work

code:

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

void shellsort(int arr[],int len){
    int sedgewick[]={929, 505, 209, 109, 41, 19, 5, 1};
    int si,d,p,i,temp;
    for(si=0;sedgewick[si]>=len;si++);//The initial increment must not exceed the length of the sequence to be sorted
    for(d=sedgewick[si];d>0;d=sedgewick[++si]){
        //Insertion sort with gap d
        for(p=d;p<len;p++){
            temp=arr[p];
            for(i=p;i>=d&&arr[i-d]>temp;i-=d)
                arr[i]=arr[i-d];
            arr[i]=temp;
        }
    }
}

Applicable scenario: Shell sort's complexity is higher than that of the O(nlogn) algorithms, so it is suitable for small and medium data sizes.

Counting sort

Idea:
Counting sort is not a comparison-based sorting algorithm. It was proposed by Harold H. Seward in 1954. By counting occurrences, the time complexity is reduced to O(n).

Although it can reduce the sorting time to O(n), two preconditions must be met: the elements to be sorted must be integers, and their values must lie within a reasonably small, concentrated range. Only when both conditions hold can counting sort show its full advantage.

Description:
1. Find the largest and smallest elements in the array to be sorted;
2. Count how many times each value min_num + i occurs and store the count in the i-th entry of the count array;
3. Write the values back into the original array according to the counts.

void countsort(long arr[],int len){
    if(len==0||len==1) return;
    int min_num=arr[0], max_num=arr[0];
    //Find the minimum and maximum to determine the value range
    for(int i=1;i<len;i++){
        if(arr[i]>max_num) max_num=arr[i];
        if(arr[i]<min_num) min_num=arr[i];
    }
    long countn[max_num-min_num+1]={}; //Count array covering the whole value range (a variable-length array on the stack)
    for(int i=0,temp;i<len;i++)
    {
        temp=arr[i];
        countn[temp-min_num]++;        //Count occurrences of each value
    }
    int index=0;
    //Write the values back in ascending order according to the counts
    for(int i=0;i<max_num-min_num+1;i++){
        while(countn[i]>0){
            arr[index++]=i+min_num;
            countn[i]--;
        }
    }
}

Use with caution! When submitted to PTA, the second test point reports a segmentation fault; we don't know why, but the other test cases are all correct. An issue has been raised and the reason is still unknown. (One plausible but unverified cause: the count array is a variable-length array on the stack, so a very large value range could overflow the stack.)

Stability: stable
Applicable scenario: integers whose value range is known, relatively small, and concentrated

Bucket sort

Idea: set up a certain number of buckets, each covering a different sub-range of values between the minimum and the maximum. Traverse the original array and place each element into the bucket whose range contains it. A linked list is usually used per bucket so that elements can be kept in order as they are inserted.

If the elements are spread fairly evenly, so that each bucket holds only a few elements, the algorithm can approach O(n) complexity.

Stability: stable as implemented here, because insertion within each bucket follows the idea of insertion sort.

Applicable scenario: the data range is relatively small and the values are fairly evenly distributed.

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include<algorithm>
using namespace std;

struct Node
    {
        int data;
        Node *next;
    };
Node **LinkList;

 void bucketsort(int *arr, int len,int build_num,int minimum,int scope)
    {
        LinkList = (Node **)malloc(sizeof(Node *) * build_num);//Create the array of bucket head pointers
        float gap = float(scope)/ build_num;
        for (int i = 0; i < build_num;i++)
        {
            LinkList[i] = (Node *)malloc(sizeof(Node));
            LinkList[i]->data = 0; //The first node stores the amount of data stored
            LinkList[i]->next = NULL;
        }

        for (int i = 0; i < len;i++)
        {
            int index = (*(arr + i) - minimum - 1) / gap;//Determine the bucket where the data is located
            if(index<0) index=0;
            Node *p=(Node *)malloc(sizeof(Node));
            p->next = NULL;
            p->data = *(arr + i);
            Node *pre = LinkList[index];
            pre->data++;
            bool flag = false;
            if(pre->next!=NULL)//Skip first count node
            {
                flag = true;//Mark that the bucket already contains at least one element
                pre = pre->next;//Move pre past the counting head node
            }
            while(pre->next != NULL&&p->data>pre->data)
                pre = pre->next;
            if(pre->next==NULL&&pre->data<p->data)
            {
                pre->next = p;
            }
            else
            {
                //No predecessor pointer is kept, so swap the data fields of the two nodes instead of relinking: after the while loop pre->data is greater than p->data
                if(flag){
                    int smaller = p->data;
                    int larger = pre->data;
                    p->next = pre->next;
                    pre->next = p;
                    p->data = larger;
                    pre->data = smaller;
                }
                else
                    pre->next = p;
            }

        }

        int num=0;
        for(int i = 0; i < build_num;i++)
        {

            Node *pre=LinkList[i];
            if(pre->next!=NULL){
                    pre = pre->next;
              while(pre!=NULL)
                {
                    if(num!=len-1) cout << pre->data << " ";
                    else cout << pre->data;
                    pre = pre->next;
                    num++;
                }

            }
            else continue;
        }


    }


int main()
{
    int len;
    cin>>len;
    int *arr=(int *)malloc(len*sizeof(int));
    for(int i=0;i<len;i++)cin>>*(arr+i);
    int maximum = *max_element(arr, arr + len);
    int minimum = *min_element(arr, arr + len);
    int scope = maximum - minimum;
    bucketsort(arr,len,10,minimum,scope);
    return 0;
}

Takeaway: the STL max_element and min_element functions.
In addition, just like counting sort, the second test point on PTA reports a segmentation fault when this is submitted, while the others are correct.

Radix sort

Idea: split integers by digit and compare them digit by digit. Use the least-significant-digit-first (LSD) method; numbers with fewer digits are treated as having leading zeros.

Description:
1. Determine how many digits the maximum value has
2. Starting from the lowest digit, create a radix bucket for each possible digit value
3. Distribute the numbers into the bucket that matches their current digit
4. Collect the buckets back into one sequence and repeat for the next digit

Time complexity: O(P(N+B)), where P is the number of digit passes and B is the radix
Stability: stable
Applicable scenario: integers; efficiency is best when the data size is roughly 100,000 to 1,000,000. It can also be applied to other multi-keyword data, such as dates or playing cards.
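As a back-of-the-envelope illustration of O(P(N+B)) (figures chosen here for illustration only): sorting N = 1,000,000 decimal integers of at most P = 6 digits with radix B = 10 costs about 6 × (1,000,000 + 10) ≈ 6 × 10^6 elementary operations, compared with roughly N·log2(N) ≈ 2 × 10^7 comparisons for an O(nlogn) comparison sort.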

#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

/* Assume that the element has at most MaxDigit keywords, and the cardinality is the same Radix */
#define MaxDigit 6
#define Radix 10

/* Bucket element node */
typedef struct Node *Pnode;
struct Node{
int key;
Pnode next;
};

/* Bucket head node */
struct Headnode {
    Pnode head, tail;
};
typedef struct Headnode Bucket[Radix];

int GetDigit ( int X, int D )
{ /* The least significant digit is D=1; the most significant is D<=MaxDigit */
    int d, i;

    for (i=1; i<=D; i++) {
        d = X % Radix;
        X /= Radix;
    }
    return d;
}

void LSDRadixSort( int A[], int N )
{ /* Radix sort - least-significant-digit (LSD) first */
     int D, Di, i;
     Bucket B;
     Pnode tmp, p, List = NULL;

     for (i=0; i<Radix; i++) /* Initialize each bucket as an empty linked list */
         B[i].head = B[i].tail = NULL;
     for (i=0; i<N; i++) { /* Store the original sequence in the initial linked List in reverse order */
         tmp = (Pnode)malloc(sizeof(struct Node));
         tmp->key = A[i];
         tmp->next = List;
         List = tmp;
     }
     /* Let's start sorting */
     for (D=1; D<=MaxDigit; D++) { /* Cyclic processing of each bit of data */
         /* The following is the allocation process */
         p = List;
         while (p) {
             Di = GetDigit(p->key, D); /* Gets the current digit of the current element */
             /* Remove from List */
             tmp = p; p = p->next;
             /* Insert B[Di] tail */
             tmp->next = NULL;
             if (B[Di].head == NULL)
                 B[Di].head = B[Di].tail = tmp;
             else {
                 B[Di].tail->next = tmp;
                 B[Di].tail = tmp;
             }
         }
         /* The following is the collection process */
         List = NULL;
         for (Di=Radix-1; Di>=0; Di--) { /* Collect the elements of each bucket into the List in order */
             if (B[Di].head) { /* If the bucket is not empty */
                 /* Insert the whole bucket into the List header */
                 B[Di].tail->next = List;
                 List = B[Di].head;
                 B[Di].head = B[Di].tail = NULL; /* Empty bucket */
             }
         }
     }
     /* Pour the List into A [] and free up space */
     for (i=0; i<N; i++) {
        tmp = List;
        List = List->next;
        A[i] = tmp->key;
        free(tmp);
     }
}




int main()
{
   int len;
   cin>>len;
   int arr[len];
   for(int i=0;i<len;i++) cin>>arr[i];
LSDRadixSort(arr,len);
   for(int i=0;i<len;i++){
    if(i!=len-1) cout<<arr[i]<<" ";
    else cout<<arr[i];
   }
    return 0;
}

When submitted to PTA, the second test point again reports a segmentation fault; everything else is correct, and I still don't know why.

Summary

Quick review of the main ideas

1. Selection sort: each pass finds one extreme value of the unsorted part and places it at the boundary of the sorted part. The usual array implementation is unstable, because the element found is swapped to the boundary and can jump over equal elements.
Average time complexity: O(n^2)
Worst time complexity: O(n^2)
Space complexity: O(1)
Stability: generally unstable

2. Bubble sort: make up to n passes over the unsorted part, each time comparing adjacent pairs and swapping them when out of order, so each pass floats the maximum of the unsorted part to the boundary of the sorted part.
Average time complexity: O(n^2)
Worst time complexity: O(n^2)
Space complexity: O(1)
Stability: stable

3. Direct insertion sort: the first element is usually treated as already sorted, and from the second element onward each element is inserted into its appropriate position until the sort is complete.
Average time complexity: O(n^2)
Worst time complexity: O(n^2)
Space complexity: O(1)
Stability: stable

4. Shell sort: first define an increment sequence D; the elements spaced D_i apart form subsequences, and each subsequence is sorted with insertion sort; this is repeated with smaller increments until D_i = 1 and all elements are sorted.
ps: performance depends on the choice of increment sequence
A common choice is sedgewick[]={929, 505, 209, 109, 41, 19, 5, 1};
Average time complexity: O(n^d) for some 1 < d < 2, depending on the increment sequence
Worst time complexity: O(n^2)
Space complexity: O(1)
Stability: unstable
Stability can be destroyed during the earlier, larger-increment passes.

5. Quick sort: select a pivot, put larger elements to its right and smaller ones to its left, then divide and conquer, applying quick sort to each side separately until the whole sequence is done.
Average time complexity: O(nlogn)
Worst time complexity: O(n^2)
The worst case occurs when the pivot chosen each time is the element that belongs at the very front (or back) of the remaining range.
Space complexity: O(logn)
The recursion has its own intermediate cost (stack frames).
Stability: unstable
Stability depends a lot on the choice of pivot and the long-distance swaps.

6. Heap sort: build a max-heap or min-heap. Once the root holds the extreme value of the unsorted part, move it into the sorted part, then rebuild the heap over the remaining unsorted elements and repeat until the sort is complete.
(uses a tree-shaped structure over the array)
Average time complexity: O(nlogn)
Worst time complexity: O(nlogn)
Space complexity: O(1)
Stability: unstable

7. Merge sort:
Non-recursive: treat each element as a sorted run of length one, merge adjacent runs pairwise into longer sorted runs, and keep merging until the sort is complete.
Recursive: recursively sort the two halves, then merge the sorted halves.
Average time complexity: O(nlogn)
Worst time complexity: O(nlogn)
Space complexity: O(N)
Stability: stable

8. Radix sort:
Find the number of digits p of the maximum value to be sorted; k is the number of possible key values (the radix), giving k buckets. Using least-significant-digit-first order, place each number into the bucket matching its current digit, collect the buckets, and repeat for all p digits before concatenating the final result. (Described for integers; other keyword types can be sorted the same way.)
Average time complexity: O(p(n+k))
Worst time complexity: O(p(n+k))
Space complexity: O(n+k)
Stability: stable

9. Counting sort:
Build an array sized by the range from min to max; the i-th entry records the number of occurrences of min+i, and finally the data is reconstructed from the counts.
k = max - min
Average time complexity: O(n+k)
Worst time complexity: O(n+k)
Space complexity: O(n+k)
Stability: stable
Applicable to integer sequences whose value range is small

10. Bucket sort
Set up a certain number of ordered buckets, each covering a different value range; distribute the numbers into the buckets, sort within each bucket, and concatenate the buckets. A linked list makes insertion convenient; a two-dimensional array with insertion sort also works.
Suitable for integers that are fairly evenly distributed
Average time complexity: O(n+k)
Worst time complexity: O(n+k)
Space complexity: O(n+k)
Stability: stable
