Preface
Let's officially get started with algorithms. Come on!
I. Measuring the efficiency of an algorithm
We all care about efficiency in study and work, and the same goes for computer programs as they execute. But how do we know whether a program's efficiency is good or bad? We need a way to measure it.
The efficiency of an algorithm is measured along two dimensions:
time complexity and space complexity
Time efficiency is captured by time complexity, which measures how fast an algorithm runs. Space efficiency is captured by space complexity, which measures the additional storage an algorithm requires.
II. Time complexity
The so-called time complexity is really a mathematical function that describes running time. The time an algorithm takes is proportional to the number of times its statements are executed.
In other words:
The number of basic operations executed in the algorithm is its time complexity.
The big-O method
When we calculate time complexity we don't need to be perfectly precise; a rough order of growth is enough. So we use big-O notation, derived with the following rules:
1. Replace every additive constant in the run-time function with the constant 1.
2. In the resulting function, keep only the highest-order term.
3. If the highest-order term exists and its coefficient is not 1, drop the coefficient. What remains is the big-O order. A small worked example follows this list.
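To make the three rules concrete, here is a quick worked example on a made-up run-time function f(N). Rule 1 turns the constant 10 into 1, rule 2 keeps only the highest-order term, and rule 3 drops its coefficient:

\[
f(N) = 2N^2 + 3N + 10 \;\Rightarrow\; 2N^2 + 3N + 1 \;\Rightarrow\; 2N^2 \;\Rightarrow\; O(N^2)
\]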
With big-O, we generally pay attention to:
the worst case of the algorithm
Now let's walk through a few examples.
Example 1 – two loops
```java
void func2(int N) {
    int count = 0;
    for (int k = 0; k < 2 * N; k++) {
        count++;
    }
    int M = 10;
    while ((M--) > 0) {
        count++;
    }
    System.out.println(count);
}
```
The first for loop executes 2 * N times and the while loop executes 10 times, so the exact operation count is 2N + 10. By the big-O rules we keep only the highest-order term and drop its coefficient, which gives O(N).
Example 2 - two sequential loops
```java
void func3(int N, int M) {
    int count = 0;
    for (int k = 0; k < M; k++) {
        count++;
    }
    for (int k = 0; k < N; k++) {
        count++;
    }
    System.out.println(count);
}
```
The two for loops execute M + N times in total. Since M and N are independent unknowns, neither term can be dropped, so the time complexity is O(M + N).
Example 3 - constant loop
```java
void func4(int N) {
    int count = 0;
    for (int k = 0; k < 100; k++) {
        count++;
    }
    System.out.println(count);
}
```
The loop executes exactly 100 times no matter what N is; by the big-O rules every constant becomes 1, so the time complexity is O(1).
Example 4 -- bubble sort
```java
void bubbleSort(int[] array) {
    for (int end = array.length; end > 0; end--) {
        boolean sorted = true;
        for (int i = 1; i < end; i++) {
            if (array[i - 1] > array[i]) {
                Swap(array, i - 1, i);
                sorted = false;
            }
        }
        if (sorted) {
            break;
        }
    }
}
```
The two for loops here are nested. In the worst case (the array in reverse order), the inner loop runs
(array.length - 1) + (array.length - 2) + (array.length - 3) + ... + 1
times in total. This is the sum of an arithmetic series, so we can apply the arithmetic-series sum formula, worked out just below.
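Writing N for array.length, the sum evaluates to:

\[
(N-1) + (N-2) + \cdots + 1 = \frac{N(N-1)}{2} = \frac{1}{2}N^2 - \frac{1}{2}N
\]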
By the big-O rules, drop the lower-order term and the leading coefficient, and the time complexity is:
O(N^2)
Example 5 - binary search
```java
int binarySearch(int[] array, int value) {
    int begin = 0;
    int end = array.length - 1;
    while (begin <= end) {
        int mid = begin + ((end - begin) / 2);
        if (array[mid] < value)
            begin = mid + 1;
        else if (array[mid] > value)
            end = mid - 1;
        else
            return mid;
    }
    return -1;
}
```
Binary search works on a sorted array.
A word about mid: mid is the index of the middle element of the current range, and key is the value we are searching for. If key is greater than array[mid], we keep the right half: the left boundary moves to mid + 1 and mid is recomputed as the middle of the right half. If key is less than array[mid], we keep the left half: the right boundary moves to mid - 1 and mid is recomputed as the middle of the left half.
We repeat this until the value is found (or the range becomes empty and -1 is returned).
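As a quick sanity check, the JDK ships the same algorithm as java.util.Arrays.binarySearch; this minimal demo (array values made up for illustration) shows the expected behavior. Note that, unlike the hand-written version above, the library reports a miss as -(insertionPoint) - 1 rather than a plain -1.

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};  // binary search requires ascending order
        System.out.println(Arrays.binarySearch(sorted, 7));  // found at index 3: prints 3
        System.out.println(Arrays.binarySearch(sorted, 4));  // missing, insertion point 2: prints -3
    }
}
```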
It can be seen that, in the worst case, each iteration halves the search range: the length goes from N to N/2 to N/4 and so on until it reaches 1, where N = array.length. The number of halvings is therefore the base-2 logarithm of N (worked out below), so the time complexity is O(log₂N), often written O(log N).
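Making the halving count explicit: if the loop runs k times before the range shrinks to 1, then

\[
\frac{N}{2^k} = 1 \;\Rightarrow\; 2^k = N \;\Rightarrow\; k = \log_2 N
\]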
Example 6 - factorial
```java
// What is the time complexity of this recursive factorial?
long factorial(int N) {
    return N < 2 ? N : factorial(N - 1) * N;
}
```
Here the factorial is computed recursively. The call chain is factorial(N) → factorial(N-1) → ... → factorial(1), so the number of calls is determined entirely by N, and the time complexity is O(N).
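The same conclusion falls out if we write the cost as a recurrence: each call does a constant amount of work plus one strictly smaller call.

\[
T(N) = T(N-1) + O(1), \quad T(1) = O(1) \;\Rightarrow\; T(N) = O(N)
\]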
Example 7 - Fibonacci sequence
```java
// What is the time complexity of this recursive fibonacci?
int fibonacci(int N) {
    // base cases: fib(1) = fib(2) = 1
    return N < 3 ? 1 : fibonacci(N - 1) + fibonacci(N - 2);
}
```
This one may be a little harder to see, so picture the recursion tree. Suppose we call fibonacci(6): it calls fibonacci(5) and fibonacci(4), each of which makes two recursive calls of its own, and so on. Level 0 of the tree has 1 call, level 1 has 2, level 2 has 4: each level roughly doubles. The right-hand branches bottom out a little earlier than the left, but we can mentally move those missing calls over to fill out the lower levels, which only changes constant factors. In this way, we can see:
The total number of calls is the sum of the first N terms of a geometric series. So what is the sum formula of a geometric series?
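For reference, the sum of the first N terms of a geometric series with first term a₁ and common ratio q ≠ 1 is:

\[
S_N = \frac{a_1\,(1 - q^N)}{1 - q}
\]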
Here a₁ = 1 and q = 2, so S_N = (1 - 2^N) / (1 - 2); flipping the signs of both the numerator and the denominator gives 2^N - 1. Removing the constant term and the coefficient of the highest-order term, the final time complexity is:
O(2^N)
III. Space complexity
Space complexity measures the storage an algorithm temporarily occupies while it runs. The additional space mentioned here is, in practice, the number of extra variables the algorithm creates. The rules for computing space complexity are similar to those for time complexity, and big-O asymptotic notation is used as well.
As usual, let's illustrate with a few examples:
Example 1: bubble sort
```java
void bubbleSort(int[] array) {
    for (int end = array.length; end > 0; end--) {
        boolean sorted = true;
        for (int i = 1; i < end; i++) {
            if (array[i - 1] > array[i]) {
                Swap(array, i - 1, i);
                sorted = false;
            }
        }
        if (sorted) {
            break;
        }
    }
}
```
Although the algorithm performs on the order of N² operations, it only ever uses a constant number of extra variables (end, sorted, i). Constants all merge into 1, so the space complexity is O(1).
Example 2: Fibonacci sequence
```java
// What is the space complexity of this fibonacci? (assumes n >= 1)
long[] fibonacci(int n) {
    long[] fibArray = new long[n + 1];
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (int i = 2; i <= n; i++) {
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    }
    return fibArray;
}
```
Here the space complexity is O(N), because an array of n + 1 elements is dynamically allocated.
Example 3: factorial
```java
// What is the space complexity of this recursive factorial?
long factorial(int N) {
    return N < 2 ? N : factorial(N - 1) * N;
}
```
Each recursive call pushes a new stack frame, and the recursion goes N levels deep before it unwinds, so N frames of space are in use at the deepest point and the space complexity is O(N).
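To see where those N frames come from, here is the call stack at its deepest point, written as a chain:

\[
\text{factorial}(N) \to \text{factorial}(N-1) \to \cdots \to \text{factorial}(1)
\]

All N calls are live at once, since none can return until factorial(1) does, so the extra space grows in proportion to N.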