# 0

For any data structure, the basic operations are no more than traversal + access; more specifically: add, delete, search, and update.

There are many kinds of data structures, but they all exist to add, delete, search, and update as efficiently as possible in different scenarios. Is that not the mission of data structures?

How do we traverse + access? Viewed from the top level, traversal + access for the various data structures takes no more than two forms: linear and non-linear.

Linear traversal is represented by for/while iteration, and non-linear traversal is represented by recursion.

Array traversal framework, typical linear iteration structure:

```java
void traverse(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // Iteratively access arr[i]
    }
}
```

Linked list traversal framework, with both an iterative and a recursive structure:

```java
/* Basic singly linked list node */
class ListNode {
    int val;
    ListNode next;
}

void traverse(ListNode head) {
    for (ListNode p = head; p != null; p = p.next) {
        // Iteratively access p.val
    }
}

void traverse(ListNode head) {
    // Recursively access head.val
    traverse(head.next);
}
```

Binary tree traversal framework, typical non-linear recursive traversal structure:

```java
/* Basic binary tree node */
class TreeNode {
    int val;
    TreeNode left, right;
}

void traverse(TreeNode root) {
    traverse(root.left);
    traverse(root.right);
}
```

Look at the recursive traversal of a binary tree and the recursive traversal of a linked list: are they similar? Now look at the structures of a binary tree and a singly linked list: are they similar? And what if there are more branches, as in an N-ary tree?

The binary tree framework can be extended to an N-ary tree traversal framework:

```java
/* Basic N-ary tree node */
class TreeNode {
    int val;
    TreeNode[] children;
}

void traverse(TreeNode root) {
    for (TreeNode child : root.children)
        traverse(child);
}
```

N-ary tree traversal can in turn be extended to graph traversal, because a graph is just a combination of several N-ary trees. You think a graph might contain cycles? Good observation: just mark visited nodes with a boolean visited array, and I won't write the code here.
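As a supplement (the original writes no code for this), here is one minimal sketch of graph traversal with a `visited` array. The adjacency-list representation (`List<List<Integer>>`) and the `order` list that records the visit sequence are my own assumptions for illustration:

```java
import java.util.*;

class GraphTraversal {
    // Graph stored as an adjacency list: adj.get(v) holds v's neighbors (assumed representation)
    static void traverse(List<List<Integer>> adj, int v, boolean[] visited, List<Integer> order) {
        if (visited[v]) return;   // the cycle guard described in the text
        visited[v] = true;
        order.add(v);
        for (int next : adj.get(v)) {
            traverse(adj, next, visited, order);
        }
    }
}
```

Without the `visited` check, a cycle such as 0 -> 1 -> 2 -> 0 would recurse forever; with it, each node is entered at most once.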

# Dynamic Programming Framework

First, the general form of a dynamic programming problem is to compute an optimum. Dynamic programming is actually an optimization method from operations research, but it is widely applied to computer problems, such as finding the longest increasing subsequence or the minimum edit distance.

Since you are asked for an optimum, what is the core issue? The core of solving dynamic programming is enumeration: to find the optimum, you must enumerate all feasible answers and pick the best among them.

Is dynamic programming really that simple, just enumeration? But the dynamic programming problems I've seen are very hard!

First, enumeration in dynamic programming is a bit special, because these problems have "overlapping subproblems", which make brute-force enumeration extremely inefficient; a "memo" or "DP table" is therefore needed to optimize the enumeration and avoid redundant computation.

Moreover, a dynamic programming problem must have "optimal substructure" so that the optimum of the original problem can be obtained from the optima of its subproblems.

In addition, although the core idea of dynamic programming is to enumerate for the optimum, problems vary endlessly, and enumerating all feasible solutions is not easy. Only by writing down the correct "state transition equation" can you enumerate correctly.

The overlapping subproblems, optimal substructure, and state transition equation mentioned above are the three elements of dynamic programming. What do they mean exactly? They will be explained in detail with examples. In real algorithm problems, writing the state transition equation is the hardest part, which is why many people find dynamic programming difficult. Let me share a thinking framework I have developed to help you derive the state transition equation:

Identify the base case -> identify the "states" -> identify the "choices" -> define the meaning of the dp array/function.

Follow this routine, and the final result will fit the framework:

```
# Initialize the base case
dp[...] = base case
# Perform state transitions
for state1 in all values of state1:
    for state2 in all values of state2:
        for ...
            dp[state1][state2][...] = optimum(choice1, choice2, ...)
```
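As one concrete instantiation of this pseudocode (the coin-change problem is my own choice of example, not taken from the original): the state is the remaining amount, the choices are the coins, and `dp[amount]` means the fewest coins needed to make up `amount`:

```java
import java.util.Arrays;

class CoinChange {
    // Fewest coins summing to amount, or -1 if impossible.
    static int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1);  // "infinity": larger than any valid answer
        dp[0] = 0;                    // base case: amount 0 needs 0 coins
        for (int state = 1; state <= amount; state++) {  // enumerate states
            for (int coin : coins) {                     // enumerate choices
                if (state - coin >= 0)
                    dp[state] = Math.min(dp[state], dp[state - coin] + 1);
            }
        }
        return dp[amount] > amount ? -1 : dp[amount];
    }
}
```

Note how each line of the sketch maps onto the pseudocode: the `Arrays.fill` plus `dp[0] = 0` is the base case, the outer loop enumerates states, and the inner loop takes the optimum over choices.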

# Backtracking Algorithm Framework

Solving a backtracking problem is actually traversing a decision tree. You only need to think about three questions:

1. Route: the choices you have already made.

2. Choice list: the choices you can still make right now.

3. End condition: the condition at the bottom of the decision tree where no more choices can be made.

If these three terms don't make sense yet, that's okay; we'll use two classic backtracking problems, Permutations and N Queens, to help you understand what they mean. For now, just keep them in mind.

On the code side, the framework of the backtracking algorithm:

```
result = []
def backtrack(route, choice_list):
    if the end condition is satisfied:
        result.add(route)
        return

    for choice in choice_list:
        make the choice
        backtrack(route, choice_list)
        undo the choice
```

Its core is the recursion inside the for loop: "make a choice" before the recursive call and "undo the choice" after the recursive call.
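To make the pseudocode concrete, here is a sketch of the Permutations problem mentioned above (the class and variable names are mine; the `route.contains` pruning is the simplest, not the fastest, way to skip used numbers):

```java
import java.util.*;

class Permutations {
    static List<List<Integer>> permute(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, new ArrayList<>(), result);
        return result;
    }

    static void backtrack(int[] nums, List<Integer> route, List<List<Integer>> result) {
        if (route.size() == nums.length) {      // end condition: bottom of the decision tree
            result.add(new ArrayList<>(route));
            return;
        }
        for (int n : nums) {
            if (route.contains(n)) continue;    // n is already on the route
            route.add(n);                       // make the choice
            backtrack(nums, route, result);
            route.remove(route.size() - 1);     // undo the choice
        }
    }
}
```

The route, the choice list (the unused numbers), and the end condition each map directly onto one line of the framework.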

# BFS Algorithm Framework

To put it plainly, let's start from the common scenario of BFS: in essence, you are asked to find the shortest distance from a start node to a target node in a graph. This description sounds dry, but BFS problems are all really doing this; once you see through the dry essence, you can enjoy unwrapping each problem's packaging.

This general description has many variants: walking a maze where some cells are blocked, what is the shortest distance from start to exit? What if the maze has "portals" that teleport you instantly?

Essentially it is still a graph in which you go from a start to a target and ask for the shortest path. That is the nature of BFS, and with that clear, the framework can be written straight out.

```
// Compute the shortest distance from start to target
int BFS(Node start, Node target) {
    Queue<Node> q;      // core data structure
    Set<Node> visited;  // avoids walking backwards

    q.offer(start);     // enqueue the starting point
    visited.add(start);
    int step = 0;       // records the number of diffusion steps

    while (q not empty) {
        int sz = q.size();
        /* Spread out from every node in the current queue */
        for (int i = 0; i < sz; i++) {
            Node cur = q.poll();
            /* Key point: check here whether the target has been reached */
            if (cur is target)
                return step;
            /* Enqueue cur's adjacent nodes */
            for (Node x : cur.adj()) {
                if (x not in visited) {
                    q.offer(x);
                    visited.add(x);
                }
            }
        }
        /* Key point: update the step count here */
        step++;
    }
}
```

The queue q is the core data structure of BFS; cur.adj() generally refers to the nodes adjacent to cur — for example, in a 2D grid, the cells above, below, left, and right of cur are its adjacent nodes. The main function of visited is to prevent walking backwards; it is necessary most of the time, but in a structure like a general binary tree, where child nodes have no pointers back to their parents, there is no way to walk back, so visited is unnecessary.

# Binary Search Framework

```
int binarySearch(int[] nums, int target) {
    int left = 0, right = ...;

    while(...) {
        int mid = left + (right - left) / 2;
        if (nums[mid] == target) {
            ...
        } else if (nums[mid] < target) {
            left = ...
        } else if (nums[mid] > target) {
            right = ...
        }
    }
    return ...;
}
```

One trick for analyzing binary search is to avoid else: write everything out as else if, so that every detail is shown explicitly. This article uses else if for clarity; readers can simplify on their own once they understand.

The parts marked with ... are where the details can go wrong. When you see binary search code, pay attention to these places first. The following examples show how these places can change.

Also, it is important to avoid overflow when calculating mid. The result of left + (right - left) / 2 in the code is the same as that of (left + right) / 2, but it effectively prevents the overflow caused by the direct addition of left and right.
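To show what the ... placeholders can become, here is the most basic variant, finding the index of target in a sorted array; filling in the details this way is my choice, and the other variants change exactly these places:

```java
class BinarySearch {
    // Returns the index of target in a sorted array, or -1 if absent.
    static int binarySearch(int[] nums, int target) {
        int left = 0, right = nums.length - 1;   // search interval [left, right]
        while (left <= right) {                  // interval non-empty
            int mid = left + (right - left) / 2; // overflow-safe midpoint
            if (nums[mid] == target) {
                return mid;
            } else if (nums[mid] < target) {
                left = mid + 1;                  // search [mid+1, right]
            } else if (nums[mid] > target) {
                right = mid - 1;                 // search [left, mid-1]
            }
        }
        return -1;                               // target not found
    }
}
```

Because both bounds are inclusive, the loop condition is `left <= right` and both updates skip mid; variants that use a half-open interval `[left, right)` change all three places together.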

# Sliding window algorithm framework

When it comes to the sliding window algorithm, many readers get a headache, but the idea behind this technique is very simple: maintain a window, keep sliding it, and update the answer along the way. LeetCode has at least 10 problems that use sliding windows, of medium and hard difficulty. The general logic of the algorithm is as follows:

```
int left = 0, right = 0;

while (right < s.size()) {
    // Grow the window
    window.add(s[right]);
    right++;

    while (window needs shrink) {
        // Shrink the window
        window.remove(s[left]);
        left++;
    }
}
```

The time complexity of this technique is O(N), far more efficient than brute-force string algorithms.

What bothers you isn't the idea of the algorithm, it's the details: how to add new elements to the window, how to shrink it, and at which stage of the sliding to update the result. Even when you understand these details, it's easy to write bugs, and not knowing how to find them is genuinely annoying.

So here is a code framework for the sliding window algorithm; I've even marked where to print debug output. Whenever you hit a problem in the future, write out the framework below from memory, then change three places, and you won't have bugs:

```
/* Sliding window algorithm framework */
void slidingWindow(string s, string t) {
    unordered_map<char, int> need, window;
    for (char c : t) need[c]++;

    int left = 0, right = 0;
    int valid = 0;
    while (right < s.size()) {
        // c is the character about to enter the window
        char c = s[right];
        // Grow the window to the right
        right++;
        // Perform a series of updates on the window data
        ...

        /*** position for debug output ***/
        printf("window: [%d, %d)\n", left, right);
        /*********************************/

        // Decide whether the left side of the window should shrink
        while (window needs shrink) {
            // d is the character about to leave the window
            char d = s[left];
            // Shrink the window from the left
            left++;
            // Perform a series of updates on the window data
            ...
        }
    }
}
```

The two ... mark the places where the window data is updated; you just fill them in.

Moreover, the operations at those two ... are the update operations for growing and shrinking the window respectively, and you will find them perfectly symmetrical.

As a side note, I find that many people like to obsess over appearances rather than explore the essence of a problem. For example, some comment on my framework that hash tables are slow and arrays should be used instead; others prefer very short code and complain that my code is too verbose and therefore not fast enough on LeetCode.

Honestly, what matters in algorithms is time complexity; make sure your time complexity is optimal. As for LeetCode's reported running speed, it is mostly noise: as long as your solution isn't absurdly slow, it's not worth micro-optimizing at that level. Don't lose the essence chasing trifles.

The labuladong public account focuses on algorithmic thinking: internalize the framework, adapt the code as needed, and you're set.

To drive this home, we'll use four LeetCode problems to exercise this framework: the first is explained in detail, and the rest you'll be able to kill in seconds.

Because sliding windows often deal with string problems, and Java is inconvenient for string handling, the code here is written in C++. No fancy programming tricks are used, and the data structures involved are briefly introduced below, so that language details won't keep anyone from understanding the algorithm idea:

unordered_map is a hash table (dictionary); its method count(key) is equivalent to Java's containsKey(key) and determines whether a key exists.

You can access values with square brackets, as in map[key]. Note that if the key does not exist, C++ automatically creates it and initializes map[key] to 0.

So the multiple occurrences of map[key]++ in the code are equivalent to Java's map.put(key, map.getOrDefault(key, 0) + 1).
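As one concrete, runnable instance of the framework, here is the classic longest-substring-without-repeating-characters problem. The choice of problem is mine, and the sketch is written in Java with an `int[]` frequency table, the array-instead-of-hash substitution mentioned earlier, which also sidesteps the C++/Java string differences:

```java
class SlidingWindowExample {
    // Length of the longest substring of s with no repeated character.
    static int lengthOfLongestSubstring(String s) {
        int[] window = new int[128];   // ASCII frequency table instead of a hash map
        int left = 0, right = 0, res = 0;
        while (right < s.length()) {
            char c = s.charAt(right);
            right++;                   // grow the window
            window[c]++;
            while (window[c] > 1) {    // shrink while c occurs twice in the window
                char d = s.charAt(left);
                left++;
                window[d]--;
            }
            res = Math.max(res, right - left);  // update the answer after shrinking
        }
        return res;
    }
}
```

The two update sites of the framework become `window[c]++` on entry and `window[d]--` on exit, and they are indeed symmetrical.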

# Framework of stock trading algorithm

Now we have completed the hardest step in dynamic programming: the state transition equation. **If you understood the previous content, you can solve all the stock problems in seconds just by applying this framework.** The last piece is to define the base case, i.e., the simplest situation.

```
dp[-1][k][0] = 0
Explanation: Since i starts from 0, i = -1 means things haven't started yet, and the profit then is of course 0.
dp[-1][k][1] = -infinity
Explanation: It is impossible to hold stock before things start.
Since our algorithm seeks a maximum, the initial value is set to minus infinity so it can never be chosen as the maximum.
dp[i][0][0] = 0
Explanation: Since k starts from 1, k = 0 means no trading is allowed at all, and the profit then is of course 0.
dp[i][0][1] = -infinity
Explanation: It is impossible to hold stock when no trading is allowed.
Since our algorithm seeks a maximum, the initial value is set to minus infinity so it can never be chosen as the maximum.
```

To summarize, together with the state transition equation:

```
base case:
dp[-1][k][0] = dp[i][0][0] = 0
dp[-1][k][1] = dp[i][0][1] = -infinity

state transition equation:
dp[i][k][0] = max(dp[i-1][k][0], dp[i-1][k][1] + prices[i])
dp[i][k][1] = max(dp[i-1][k][1], dp[i-1][k-1][0] - prices[i])
```

Readers may ask how the index -1 can be expressed in an actual array, and how negative infinity is represented. These are implementation details with many possible solutions. Now that the complete framework is ready, let's make it concrete.
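As one way to make the -1 index and negative infinity concrete, here is a sketch for the special case k = 1 (this specialization and the variable names are mine): when k = 1, dp[i-1][0][0] = 0, so the transition for holding stock simplifies to max(dp_i_1, -prices[i]), and the -1 row collapses into the initial values of two rolling variables:

```java
class StockK1 {
    // Max profit with at most one transaction (k = 1), space-optimized from the framework.
    static int maxProfit(int[] prices) {
        int dp_i_0 = 0;                        // dp[i][1][0]: not holding; dp[-1][1][0] = 0
        int dp_i_1 = Integer.MIN_VALUE;        // dp[i][1][1]: holding; dp[-1][1][1] = -infinity
        for (int price : prices) {
            // dp[i][1][0] = max(dp[i-1][1][0], dp[i-1][1][1] + prices[i])
            dp_i_0 = Math.max(dp_i_0, dp_i_1 + price);
            // dp[i][1][1] = max(dp[i-1][1][1], dp[i-1][0][0] - prices[i]), and dp[i-1][0][0] == 0
            dp_i_1 = Math.max(dp_i_1, -price);
        }
        return dp_i_0;
    }
}
```

Here `Integer.MIN_VALUE` stands in for negative infinity, and initializing the variables before the loop plays the role of the i = -1 row.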

# Framework of the House Robber Algorithm

Between the two choices, pick the larger result each time, and you end up with the most money that can be robbed:

The robber can arrive at a given position along many different routes; wouldn't it waste time to recurse fully every time? So there are overlapping subproblems, which can be optimized with a memo:

```java
private int[] memo;
// Main function
public int rob(int[] nums) {
    // Initialize the memo
    memo = new int[nums.length];
    Arrays.fill(memo, -1);
    // The robber starts robbing from house 0
    return dp(nums, 0);
}

// Returns the maximum amount that can be robbed from nums[start..]
private int dp(int[] nums, int start) {
    if (start >= nums.length) {
        return 0;
    }
    // Avoid duplicate computation
    if (memo[start] != -1) return memo[start];

    int res = Math.max(dp(nums, start + 1),
                       nums[start] + dp(nums, start + 2));
    // Record the result in the memo
    memo[start] = res;
    return res;
}
```

This is the top-down dynamic programming solution, and we can also slightly modify it to write the bottom-up solution.

We also notice that the state transition involves only the two most recent states of dp[i], so it can be further optimized to reduce the space complexity to O(1).

```java
int rob(int[] nums) {
    int n = nums.length;
    // Record dp[i+1] and dp[i+2]
    int dp_i_1 = 0, dp_i_2 = 0;
    // Record dp[i]
    int dp_i = 0;
    for (int i = n - 1; i >= 0; i--) {
        dp_i = Math.max(dp_i_1, nums[i] + dp_i_2);
        dp_i_2 = dp_i_1;
        dp_i_1 = dp_i;
    }
    return dp_i;
}
```

# nSum Problem Framework

```cpp
vector<vector<int>> twoSumTarget(vector<int>& nums, int target) {
    // The nums array must be sorted
    sort(nums.begin(), nums.end());
    int lo = 0, hi = nums.size() - 1;
    vector<vector<int>> res;
    while (lo < hi) {
        int sum = nums[lo] + nums[hi];
        int left = nums[lo], right = nums[hi];
        if (sum < target) {
            while (lo < hi && nums[lo] == left) lo++;
        } else if (sum > target) {
            while (lo < hi && nums[hi] == right) hi--;
        } else {
            res.push_back({left, right});
            while (lo < hi && nums[lo] == left) lo++;
            while (lo < hi && nums[hi] == right) hi--;
        }
    }
    return res;
}
```

The nSum function:

```cpp
/* Note: nums must be sorted before calling this function */
vector<vector<int>> nSumTarget(
    vector<int>& nums, int n, int start, int target) {

    int sz = nums.size();
    vector<vector<int>> res;
    // At least 2Sum, and the array size should not be less than n
    if (n < 2 || sz < n) return res;
    // 2Sum is the base case
    if (n == 2) {
        // The usual two-pointer routine
        int lo = start, hi = sz - 1;
        while (lo < hi) {
            int sum = nums[lo] + nums[hi];
            int left = nums[lo], right = nums[hi];
            if (sum < target) {
                while (lo < hi && nums[lo] == left) lo++;
            } else if (sum > target) {
                while (lo < hi && nums[hi] == right) hi--;
            } else {
                res.push_back({left, right});
                while (lo < hi && nums[lo] == left) lo++;
                while (lo < hi && nums[hi] == right) hi--;
            }
        }
    } else {
        // When n > 2, recursively compute the (n-1)Sum results
        for (int i = start; i < sz; i++) {
            vector<vector<int>> sub =
                nSumTarget(nums, n - 1, i + 1, target - nums[i]);
            for (vector<int>& arr : sub) {
                // (n-1)Sum plus nums[i] is nSum
                arr.push_back(nums[i]);
                res.push_back(arr);
            }
            while (i < sz - 1 && nums[i] == nums[i + 1]) i++;
        }
    }
    return res;
}
```

It looks long, but it just combines the earlier solutions: n == 2 is twoSum's two-pointer solution; for n > 2, enumerate the first number and recursively compute the (n-1)Sum results, then assemble the answers.

It is important to note that nums must be sorted before calling this nSum function, because nSum is recursive: if the sorting were done inside nSum, every recursive call would sort again unnecessarily, which is very inefficient.

For example, here is how to solve the 4Sum problem on LeetCode:

```cpp
vector<vector<int>> fourSum(vector<int>& nums, int target) {
    sort(nums.begin(), nums.end());
    // n is 4, computing quadruples from nums and target
    return nSumTarget(nums, 4, 0, target);
}
```

Another example is LeetCode's 3Sum problem, which finds triples with target == 0:

```cpp
vector<vector<int>> threeSum(vector<int>& nums) {
    sort(nums.begin(), nums.end());
    // n is 3, computing triples from nums and 0
    return nSumTarget(nums, 3, 0, 0);
}
```

# Binary Tree Traversal Framework

```java
/* Binary tree traversal framework */
void traverse(TreeNode root) {
    // Pre-order position
    traverse(root.left);
    // In-order position
    traverse(root.right);
    // Post-order position
}
```

If you tell me that quick sort is a pre-order traversal of a binary tree and merge sort is a post-order traversal of a binary tree, I'll know you're an algorithm master.

Why are quick sort and merge sort related to binary trees? Let's briefly analyze their algorithm ideas and code frameworks:

The logic of quick sort: to sort nums[lo..hi], we first find a partition point p and, by swapping elements, make nums[lo..p-1] all less than or equal to nums[p] and nums[p+1..hi] all greater than nums[p]; then we recursively find new partition points in nums[lo..p-1] and nums[p+1..hi], and the whole array ends up sorted.

The quick sort code framework is as follows:

```java
void sort(int[] nums, int lo, int hi) {
    /****** pre-order position ******/
    // Construct the partition point p by swapping elements
    int p = partition(nums, lo, hi);
    /********************************/

    sort(nums, lo, p - 1);
    sort(nums, p + 1, hi);
}
```

Construct the partition point first, then construct partition points in the left and right subarrays: isn't that exactly a pre-order traversal of a binary tree?
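The framework leaves partition undefined; one common way to fill it in is the Lomuto scheme with the last element as pivot. This choice is an assumption on my part, since the original does not fix a partitioning scheme:

```java
class QuickSortSketch {
    // Lomuto partition: move nums[hi] to its sorted position p and return p.
    static int partition(int[] nums, int lo, int hi) {
        int pivot = nums[hi];
        int p = lo;                   // invariant: nums[lo..p-1] <= pivot
        for (int i = lo; i < hi; i++) {
            if (nums[i] <= pivot) {
                swap(nums, i, p);
                p++;
            }
        }
        swap(nums, p, hi);            // put the pivot at the partition point
        return p;
    }

    static void sort(int[] nums, int lo, int hi) {
        if (lo >= hi) return;         // base case the framework leaves implicit
        int p = partition(nums, lo, hi);  // pre-order position
        sort(nums, lo, p - 1);
        sort(nums, p + 1, hi);
    }

    static void swap(int[] nums, int i, int j) {
        int t = nums[i]; nums[i] = nums[j]; nums[j] = t;
    }
}
```

Note the explicit `lo >= hi` base case, which the bare framework omits but any runnable version needs.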

Now for merge sort's logic: to sort nums[lo..hi], we first sort nums[lo..mid], then sort nums[mid+1..hi], and finally merge the two sorted subarrays, leaving the whole array sorted.

The merge sort code framework is as follows:

```java
void sort(int[] nums, int lo, int hi) {
    int mid = (lo + hi) / 2;
    sort(nums, lo, mid);
    sort(nums, mid + 1, hi);

    /****** post-order position ******/
    // Merge the two sorted subarrays
    merge(nums, lo, mid, hi);
    /*********************************/
}
```

Sort the left and right subarrays first, then merge them (similar to the logic of merging two sorted linked lists): isn't this exactly a binary tree's post-order traversal framework? Incidentally, isn't this also the legendary divide-and-conquer algorithm? That's all it is.
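Similarly, the merge function is left undefined by the framework; here is one standard sketch using an auxiliary array (an assumed implementation, not code from the original):

```java
class MergeSortSketch {
    // Merge the sorted halves nums[lo..mid] and nums[mid+1..hi] in place.
    static void merge(int[] nums, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo + 1];
        int i = lo, j = mid + 1, k = 0;
        while (i <= mid && j <= hi)                 // take the smaller head each time
            tmp[k++] = (nums[i] <= nums[j]) ? nums[i++] : nums[j++];
        while (i <= mid) tmp[k++] = nums[i++];      // drain the left half
        while (j <= hi)  tmp[k++] = nums[j++];      // drain the right half
        System.arraycopy(tmp, 0, nums, lo, tmp.length);
    }

    static void sort(int[] nums, int lo, int hi) {
        if (lo >= hi) return;          // base case the framework leaves implicit
        int mid = lo + (hi - lo) / 2;
        sort(nums, lo, mid);
        sort(nums, mid + 1, hi);
        merge(nums, lo, mid, hi);      // post-order position
    }
}
```

The two-pointer drain is the same logic as merging two sorted linked lists, as noted above.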

The key to recursive algorithms is to clarify the definition of a function, trust it, and not jump into recursive details.

Writing algorithm solutions for binary trees is based on this recursive framework: first figure out what the root node itself needs to do, then choose the pre-order, in-order, or post-order recursive position according to the problem's requirements.

The difficulty of binary tree problems lies in figuring out, from the problem's requirements, what each node needs to do.
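For instance (an illustration of mine, not a problem from the original): to count a tree's nodes, each node only needs to add 1 to its children's counts, work that naturally sits in the post-order position:

```java
class TreeCount {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // Each node's job: report 1 plus the sizes of its two subtrees.
    static int count(TreeNode root) {
        if (root == null) return 0;    // base case: empty tree
        int left = count(root.left);
        int right = count(root.right);
        return left + right + 1;       // post-order position: combine child results
    }
}
```

The same skeleton with the per-node work moved to the pre-order position would instead suit problems like printing or mirroring the tree top-down.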

# 0-1 Backpack Problem Framework

The following C++ code translates the above ideas completely and handles the case where w - wt[i-1] may be less than 0, which would otherwise cause the array index to go out of bounds:

```cpp
int knapsack(int W, int N, vector<int>& wt, vector<int>& val) {
    // base case initialized
    vector<vector<int>> dp(N + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= N; i++) {
        for (int w = 1; w <= W; w++) {
            if (w - wt[i-1] < 0) {
                // In this case, the only choice is not to put item i in the knapsack
                dp[i][w] = dp[i - 1][w];
            } else {
                dp[i][w] = max(dp[i - 1][w - wt[i-1]] + val[i-1],
                               dp[i - 1][w]);
            }
        }
    }

    return dp[N][W];
}
```

At this point, the knapsack problem is solved. By comparison, I think this is a relatively easy dynamic programming problem, because the state transition derivation is natural: once you have a clear definition of the dp array, the state transition follows naturally.
