Divide and conquer
Binary search algorithm
Basic idea
Binary search is a classic application of the divide and conquer strategy.
Given n elements a[0:n-1] sorted in ascending order, we want to find a particular element x.
The first method that comes to mind is sequential search: compare the elements of a[0:n-1] one by one until x is found or the entire array has been searched without finding it. This method does not exploit the fact that the n elements are already sorted, so in the worst case sequential search needs O(n) comparisons.
Binary search makes full use of the order relation between the elements and adopts the divide and conquer strategy. In the worst case it finds the element x in O(log n) time.
Its basic idea is: split the n elements into two halves of roughly equal size and compare a[n/2] with x. If a[n/2] == x, the search ends; if x < a[n/2], apply the same method to a[0:n/2-1]; if x > a[n/2], apply the same method to a[n/2+1:n-1].
Code
public static int binarySearch(int[] a, int x, int n) {
    int left = 0;
    int right = n - 1;
    while (left <= right) {
        int middle = (left + right) / 2;
        if (x == a[middle]) return middle;
        if (x < a[middle]) right = middle - 1; // continue in the left half
        else left = middle + 1;                // continue in the right half
    }
    return -1; // x is not in a[0:n-1]
}
It is easy to see that each iteration of the while loop halves the range still to be searched, so in the worst case the loop executes O(log n) times. The work inside the loop takes O(1) time, so the worst-case time complexity of the whole algorithm is O(log n).
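As a quick sanity check, the method above can be exercised on a small sorted array. This is a self-contained sketch; the class name and sample data are illustrative, not from the original.

```java
public class BinarySearchDemo {
    // Iterative binary search over an ascending sorted array, as described above.
    static int binarySearch(int[] a, int x, int n) {
        int left = 0;
        int right = n - 1;
        while (left <= right) {
            int middle = (left + right) / 2;
            if (x == a[middle]) return middle;
            if (x < a[middle]) right = middle - 1; // x must be in the left half
            else left = middle + 1;                // x must be in the right half
        }
        return -1; // x is not present
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(a, 7, a.length));  // found at index 3
        System.out.println(binarySearch(a, 4, a.length));  // not present: -1
    }
}
```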
Merge sort
Basic idea
Merge sort sorts n elements using the divide and conquer strategy.
The basic idea is: divide the elements to be sorted into two subsets of roughly equal size, sort each subset recursively, and finally merge the two sorted subsets into the required sorted sequence. The recursion tree of splits and merges captures the divide and conquer structure of the algorithm.
Code
public static void mergeSort(int[] a, int[] b, int left, int right) {
    if (left < right) {
        int mid = (left + right) / 2;
        mergeSort(a, b, left, mid);      // sort the left half
        mergeSort(a, b, mid + 1, right); // sort the right half
        merge(a, b, left, mid, right);   // merge into array b
        copy(a, b, left, right);         // copy back to array a
    }
}

private static void merge(int[] a, int[] b, int left, int mid, int right) {
    int i = left;
    int j = mid + 1;
    int t = left;
    while (i <= mid && j <= right) {
        if (a[i] <= a[j]) b[t++] = a[i++];
        else b[t++] = a[j++];
    }
    while (i <= mid) b[t++] = a[i++];   // remaining elements on the left
    while (j <= right) b[t++] = a[j++]; // remaining elements on the right
}

private static void copy(int[] a, int[] b, int left, int right) {
    for (int i = left; i <= right; i++) a[i] = b[i];
}
The merge method merges two sorted runs into array b. For sorting n elements, the worst-case running time T(n) of merge sort satisfies the recurrence T(n) = 2T(n/2) + O(n) for n > 1.
Solving the recurrence gives T(n) = O(n log n). Since the lower bound for comparison-based sorting is Ω(n log n), merge sort is an asymptotically optimal algorithm.
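The sort can be exercised end to end with a small driver. This is a self-contained sketch (class name and sample data are illustrative); the scratch array b is passed explicitly, and System.arraycopy stands in for the copy step.

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Top-down merge sort of a[left:right], using b as scratch space.
    static void mergeSort(int[] a, int[] b, int left, int right) {
        if (left < right) {
            int mid = (left + right) / 2;
            mergeSort(a, b, left, mid);      // sort the left half
            mergeSort(a, b, mid + 1, right); // sort the right half
            merge(a, b, left, mid, right);   // merge the two halves into b
            System.arraycopy(b, left, a, left, right - left + 1); // copy back to a
        }
    }

    static void merge(int[] a, int[] b, int left, int mid, int right) {
        int i = left, j = mid + 1, t = left;
        while (i <= mid && j <= right)
            b[t++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) b[t++] = a[i++];   // remaining elements on the left
        while (j <= right) b[t++] = a[j++]; // remaining elements on the right
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        mergeSort(a, new int[a.length], 0, a.length - 1);
        System.out.println(Arrays.toString(a));
    }
}
```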
Quick sort
Basic idea
Quicksort is another sorting algorithm based on the divide and conquer strategy.
For an input subarray a[p:r], sort as follows:
- Divide: using a[p] as the pivot, partition a[p:r] into three parts a[p:q-1], a[q], and a[q+1:r], such that every element in a[p:q-1] is less than or equal to a[q] and every element in a[q+1:r] is greater than or equal to a[q]. In other words, elements smaller than the pivot go to its left and elements larger than the pivot go to its right.
- Conquer: recursively apply quicksort to a[p:q-1] and a[q+1:r].
- Combine: since a[p:q-1] and a[q+1:r] are sorted in place, no further work is needed; the original array is now sorted.
Code
public static void qSort(int[] a, int p, int r) {
    if (p < r) {
        int q = partition(a, p, r);
        qSort(a, p, q - 1);
        qSort(a, q + 1, r);
    }
}

public static int partition(int[] a, int p, int r) {
    int left = p;
    int right = r;
    int x = a[left]; // pivot
    while (left < right) {
        while (left < right && a[right] >= x) right--; // scan from the right
        if (left < right) a[left] = a[right];
        while (left < right && a[left] <= x) left++;   // scan from the left
        if (left < right) a[right] = a[left];
    }
    a[left] = x; // the pivot lands in its final position
    return left;
}
The key to the algorithm is the partition step, which splits the subarray around the pivot element a[p]. Each call to partition takes x = a[p] as the pivot and then moves the left and right pointers inward, starting from the right. As long as a[right] >= x, right keeps moving left; once a[right] < x, the value a[right] is copied into a[left], which is safe because the pivot's old slot has been saved in x, so the rearrangement happens in place. After that assignment, left starts moving right, and the process mirrors itself from the left side.
In the worst case, each partition of quicksort produces two regions containing n-1 elements and 0 elements. If this maximally unbalanced split occurs at every level, then T(n) = T(n-1) + O(n), which gives a worst-case time of T(n) = O(n^2).
In the best and average cases, quicksort's time complexity is O(n log n).
Quicksort is not a stable sorting algorithm.
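Here is a self-contained sketch exercising qSort on sample data (class name and test array are illustrative); the array is passed as a parameter rather than kept as a static field.

```java
import java.util.Arrays;

public class QuickSortDemo {
    static void qSort(int[] a, int p, int r) {
        if (p < r) {
            int q = partition(a, p, r);
            qSort(a, p, q - 1);  // sort elements left of the pivot
            qSort(a, q + 1, r);  // sort elements right of the pivot
        }
    }

    // In-place "hole filling" partition around the pivot x = a[p].
    static int partition(int[] a, int p, int r) {
        int left = p, right = r;
        int x = a[left];
        while (left < right) {
            while (left < right && a[right] >= x) right--;
            if (left < right) a[left] = a[right]; // move smaller element left
            while (left < right && a[left] <= x) left++;
            if (left < right) a[right] = a[left]; // move larger element right
        }
        a[left] = x; // pivot in final position
        return left;
    }

    public static void main(String[] args) {
        int[] a = {3, 8, 7, 1, 2, 5};
        qSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));
    }
}
```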
Linear time selection
Basic idea
The selection problem in general: given a linear sequence of n elements and an integer k, find the k-th smallest of the n elements.
The linear-time selection algorithm mimics quicksort: it recursively partitions the input array. Randomly choose an index i and use a[i] as the pivot, placing elements less than a[i] on its left and elements greater than a[i] on its right. Let j be the number of elements in the left part after the partition (including the pivot). Then compare k with j: if k <= j, the k-th smallest element must lie in the left part, so recursively find the k-th smallest element in the left half; if k > j, the k-th smallest element lies to the right of the pivot, so recursively find the (k-j)-th smallest element in the right half.
Code
public static int randomizedSelect(int[] a, int p, int r, int k) {
    if (p == r) return a[p]; // only one element left
    int i = randomizedPartition(a, p, r);
    int j = i - p + 1; // number of elements in the left part a[p:i]
    if (k <= j) return randomizedSelect(a, p, i, k);
    else return randomizedSelect(a, i + 1, r, k - j);
}

public static int randomizedPartition(int[] a, int p, int r) {
    int i = random(p, r);  // random index in [p, r]
    MyMath.swap(a, i, p);  // move the random pivot to the front
    return partition(a, p, r);
}
In the worst case, randomizedSelect needs Ω(n^2) computation time. The partition function here is the same as quicksort's. Because randomizedPartition uses the random number generator random(p, r) to pick a random integer between p and r, the partition pivot is random; under this condition it can be shown that randomizedSelect finds the k-th smallest of n input elements in O(n) average time.
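As a runnable sketch, java.util.Random can stand in for the random(p, r) helper and an inline swap for MyMath.swap (both substitutions are assumptions; the class name, sample data, and fixed seed are illustrative):

```java
import java.util.Random;

public class SelectDemo {
    static final Random RNG = new Random(42); // fixed seed only for reproducibility

    // Returns the k-th smallest element of a[p:r] (k is 1-based).
    static int randomizedSelect(int[] a, int p, int r, int k) {
        if (p == r) return a[p];
        int i = randomizedPartition(a, p, r);
        int j = i - p + 1; // size of the left part a[p:i]
        if (k <= j) return randomizedSelect(a, p, i, k);
        else return randomizedSelect(a, i + 1, r, k - j);
    }

    static int randomizedPartition(int[] a, int p, int r) {
        int i = p + RNG.nextInt(r - p + 1); // random pivot index in [p, r]
        int tmp = a[i]; a[i] = a[p]; a[p] = tmp;
        return partition(a, p, r);
    }

    // Same in-place fill partition as quicksort's.
    static int partition(int[] a, int p, int r) {
        int left = p, right = r, x = a[left];
        while (left < right) {
            while (left < right && a[right] >= x) right--;
            if (left < right) a[left] = a[right];
            while (left < right && a[left] <= x) left++;
            if (left < right) a[right] = a[left];
        }
        a[left] = x;
        return left;
    }

    public static void main(String[] args) {
        int[] a = {7, 10, 4, 3, 20, 15};
        System.out.println(randomizedSelect(a, 0, a.length - 1, 3)); // 3rd smallest
    }
}
```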
Maximum sub segment sum
Problem Description:
Given a sequence of n integers (possibly negative) a1, a2, ..., an, find the maximum value over all contiguous sub-segment sums of the form Σ_{k=i}^{j} a_k. When all integers are negative, the maximum sub-segment sum is defined as 0. Under this definition, the optimal value is max{0, max_{1<=i<=j<=n} Σ_{k=i}^{j} a_k}.
For example, when (a1, a2, ..., a6) = (-2, 11, -4, 13, -5, -2), the maximum sub-segment sum is 20 (from 11 - 4 + 13).
Basic idea
If the given sequence a[1:n] is split into two halves a[1:n/2] and a[n/2+1:n] of equal length and the maximum sub-segment sum of each half is computed, then the maximum sub-segment sum of a[1:n] falls into one of three cases:
- It lies entirely within a[1:n/2], so it equals the maximum sub-segment sum of a[1:n/2].
- It lies entirely within a[n/2+1:n], so it equals the maximum sub-segment sum of a[n/2+1:n].
- It crosses the midpoint: it equals the sum of a sub-segment of a[1:n/2] ending at n/2 and a sub-segment of a[n/2+1:n] starting at n/2+1.
Cases 1 and 2 are handled directly by recursion. In case 3, a[n/2] and a[n/2+1] both belong to the optimal sub-segment, so we only need to compute the maximum suffix sum s1 of a[1:n/2] ending at n/2 and the maximum prefix sum s2 of a[n/2+1:n] starting at n/2+1, then add them: s = s1 + s2 is the optimal value for case 3.
Code
public int maxSubSum(int[] a, int left, int right) {
    if (left == right) return a[left] > 0 ? a[left] : 0; // the empty sub-segment counts as 0
    int mid = (left + right) / 2;
    int leftSum = maxSubSum(a, left, mid);       // case 1
    int rightSum = maxSubSum(a, mid + 1, right); // case 2
    // Case 3: maximum suffix sum of a[left:mid] ending at mid ...
    int s1 = 0;
    int tempS = 0;
    for (int i = mid; i >= left; i--) {
        tempS += a[i];
        if (tempS > s1) s1 = tempS;
    }
    // ... plus the maximum prefix sum of a[mid+1:right] starting at mid+1
    int s2 = 0;
    tempS = 0;
    for (int i = mid + 1; i <= right; i++) {
        tempS += a[i];
        if (tempS > s2) s2 = tempS;
    }
    // Return the largest of the three cases
    return Math.max(Math.max(leftSum, rightSum), s1 + s2);
}
Since T(n) = 2T(n/2) + O(n), the time complexity is O(n log n).
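The method can be verified against the example from the problem description; this self-contained sketch uses an illustrative class name and a static variant of the method.

```java
public class MaxSubSumDemo {
    // Divide-and-conquer maximum sub-segment sum (0 if all elements are negative).
    static int maxSubSum(int[] a, int left, int right) {
        if (left == right) return Math.max(a[left], 0);
        int mid = (left + right) / 2;
        int leftSum = maxSubSum(a, left, mid);       // best entirely in the left half
        int rightSum = maxSubSum(a, mid + 1, right); // best entirely in the right half
        // Best suffix sum ending at mid ...
        int s1 = 0, tempS = 0;
        for (int i = mid; i >= left; i--) {
            tempS += a[i];
            if (tempS > s1) s1 = tempS;
        }
        // ... plus the best prefix sum starting at mid + 1.
        int s2 = 0;
        tempS = 0;
        for (int i = mid + 1; i <= right; i++) {
            tempS += a[i];
            if (tempS > s2) s2 = tempS;
        }
        return Math.max(Math.max(leftSum, rightSum), s1 + s2);
    }

    public static void main(String[] args) {
        int[] a = {-2, 11, -4, 13, -5, -2}; // example from the text
        System.out.println(maxSubSum(a, 0, a.length - 1)); // 20
    }
}
```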