Dynamic Programming (DP) is one of the most powerful problem-solving paradigms in computer science. While it might seem intimidating at first, understanding its core principles and patterns can transform how you approach optimization problems. This article will guide you through the fundamentals of dynamic programming, from the theoretical foundations to practical LeetCode problem-solving strategies.
What is Dynamic Programming?
Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems. The key insight is that instead of solving the same subproblem multiple times, we solve it once and store the result for future use. This approach dramatically improves efficiency for problems that exhibit two critical properties: optimal substructure and overlapping subproblems.
The term "dynamic programming" was coined by Richard Bellman in the 1950s. Interestingly, Bellman chose this name not because the method involves dynamic or changing behavior, but rather to make it sound impressive enough to secure funding. Despite its somewhat misleading name, DP has become one of the most fundamental techniques in algorithm design.
Core Principles
Optimal Substructure
A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems. In other words, if we can solve the problem optimally by combining optimal solutions to smaller problems, then the problem has optimal substructure.
Consider the classic example of finding the shortest path in a graph. If the shortest path from node A to node C goes through node B, then the path from A to B must be the shortest path from A to B, and the path from B to C must be the shortest path from B to C. This property allows us to build solutions incrementally.
Example: In the Fibonacci sequence, `F(n) = F(n-1) + F(n-2)`. The value of `F(n)` is built directly from the solutions to its two smaller subproblems, so the problem has optimal substructure.
Overlapping Subproblems
A problem has overlapping subproblems if the same subproblem is solved multiple times during the computation. This is where DP shines — instead of recalculating the same values repeatedly, we store results in a table (memoization) and reuse them.
Example: When computing `F(5)` with naive recursion, `F(3)` is computed twice and `F(2)` three times. The same subproblems recur exponentially often; memoization computes each one exactly once.
When to Use Dynamic Programming
DP is most effective when:

1. The problem can be broken down into smaller subproblems
2. These subproblems overlap (the same subproblem appears multiple times)
3. The problem has optimal substructure (the optimal solution uses optimal subproblem solutions)
4. You need to find an optimal value (maximum, minimum, count, etc.) rather than all possible solutions
The Three-Step DP Process
Every DP problem can be approached systematically using these three steps:
Step 1: Define the State
The state represents a subproblem. You need to identify:
- What information you need to track
- What parameters uniquely identify a subproblem
- What the state represents (usually the answer to a subproblem)
Example: For climbing stairs, the state `dp[i]` represents "the number of ways to reach step `i`".
Step 2: Find the State Transition
The state transition (recurrence relation) describes how to compute a state from previous states. This is the heart of DP — it captures the relationship between subproblems.
Example: For climbing stairs, `dp[i] = dp[i-1] + dp[i-2]`, because you can reach step `i` either from step `i-1` (taking one step) or from step `i-2` (taking two steps).
Step 3: Determine Base Cases and Initialization
Base cases are the smallest subproblems that can be solved directly without recursion. They prevent infinite recursion and provide starting points for building up solutions.
Example: For climbing stairs, `dp[0] = 1` (one way to stay at ground level) and `dp[1] = 1` (one way to reach the first step).
One-Dimensional Dynamic Programming
One-dimensional DP problems use a single array to store subproblem solutions. The state typically depends on a single parameter (usually position or index).
LeetCode 70: Climbing Stairs
LeetCode 70: You are climbing a staircase. It takes `n` steps to reach the top. Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?
Analysis:

- State: `dp[i]` = number of ways to reach step `i`
- Transition: `dp[i] = dp[i-1] + dp[i-2]` (you can come from step `i-1` or step `i-2`)
- Base cases: `dp[0] = 1`, `dp[1] = 1`

Solution:

```python
def climbStairs(n):
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```

Time Complexity: O(n)
Space Complexity: O(n), reducible to O(1) by keeping only the last two values:

```python
def climbStairs(n):
    prev, curr = 1, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```
LeetCode 198: House Robber
LeetCode 198: You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed. You cannot rob two adjacent houses. Determine the maximum amount of money you can rob tonight.
Analysis:

- State: `dp[i]` = maximum money that can be robbed from houses `0` to `i`
- Transition: At house `i`, we have two choices:
  - Rob house `i`: `dp[i] = nums[i] + dp[i-2]`
  - Skip house `i`: `dp[i] = dp[i-1]`
  - Take the maximum: `dp[i] = max(nums[i] + dp[i-2], dp[i-1])`
- Base cases: `dp[0] = nums[0]`, `dp[1] = max(nums[0], nums[1])`

Solution:

```python
def rob(nums):
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    for i in range(2, len(nums)):
        dp[i] = max(nums[i] + dp[i-2], dp[i-1])
    return dp[-1]
```

Time Complexity: O(n)
Space Complexity: O(n), reducible to O(1) with two rolling variables:

```python
def rob(nums):
    prev, curr = 0, 0
    for num in nums:
        prev, curr = curr, max(num + prev, curr)
    return curr
```
LeetCode 121: Best Time to Buy and Sell Stock
LeetCode 121: You are given an array `prices` where `prices[i]` is the price of a given stock on day `i`. You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell it. Return the maximum profit you can achieve; if no profit is possible, return 0.
Analysis:

- State: `dp[i]` = maximum profit achievable up to day `i`
- Transition:
  - Track the minimum price seen so far: `min_price = min(min_price, prices[i])`
  - Calculate the profit if selling today: `profit = prices[i] - min_price`
  - Update the maximum: `dp[i] = max(dp[i-1], profit)`
- Base case: `dp[0] = 0` (no profit on the first day)
Solution:

```python
def maxProfit(prices):
    dp = [0] * len(prices)
    min_price = prices[0]
    for i in range(1, len(prices)):
        min_price = min(min_price, prices[i])
        dp[i] = max(dp[i-1], prices[i] - min_price)
    return dp[-1]
```

Time Complexity: O(n)
Space Complexity: O(n), reducible to O(1) since only the previous entry is needed:

```python
def maxProfit(prices):
    max_profit = 0
    min_price = prices[0]
    for price in prices[1:]:
        min_price = min(min_price, price)
        max_profit = max(max_profit, price - min_price)
    return max_profit
```
LeetCode 322: Coin Change
LeetCode 322: You are given an integer array `coins` representing coins of different denominations and an integer `amount` representing a total amount of money. Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return `-1`.
Analysis:

- State: `dp[i]` = minimum number of coins needed to make amount `i`
- Transition: For each coin denomination, try using it: `dp[i] = min(dp[i], dp[i - coin] + 1)` for all valid coins
- Base case: `dp[0] = 0` (0 coins needed for amount 0)
Solution:

```python
def coinChange(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
```

Time Complexity: O(amount × number of coins)
Space Complexity: O(amount)
Two-Dimensional Dynamic Programming
Two-dimensional DP problems use a 2D table where the state depends on two parameters. These are often more complex but follow the same principles.
LeetCode 1143: Longest Common Subsequence
LeetCode 1143: Given two strings `text1` and `text2`, return the length of their longest common subsequence. A subsequence is a sequence that appears in the same relative order, but is not necessarily contiguous.
Analysis:

- State: `dp[i][j]` = length of the LCS of `text1[0:i]` and `text2[0:j]`
- Transition:
  - If `text1[i-1] == text2[j-1]`: `dp[i][j] = dp[i-1][j-1] + 1`
  - Otherwise: `dp[i][j] = max(dp[i-1][j], dp[i][j-1])`
- Base cases: `dp[0][j] = 0`, `dp[i][0] = 0` (an empty string has LCS length 0)
Solution:

```python
def longestCommonSubsequence(text1, text2):
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]
```

Time Complexity: O(m × n)
Space Complexity: O(m × n), reducible to O(n) by keeping only the previous row:

```python
def longestCommonSubsequence(text1, text2):
    m, n = len(text1), len(text2)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev = curr
    return prev[n]
```
LeetCode 72: Edit Distance
LeetCode 72: Given two strings word1
and word2, return the minimum number of operations required
to convert word1 to word2. You have three
operations permitted: insert a character, delete a character, or replace
a character.
Analysis:

- State: `dp[i][j]` = minimum operations to convert `word1[0:i]` to `word2[0:j]`
- Transition:
  - If `word1[i-1] == word2[j-1]`: `dp[i][j] = dp[i-1][j-1]` (no operation needed)
  - Otherwise, take the minimum of:
    - Insert: `dp[i][j-1] + 1`
    - Delete: `dp[i-1][j] + 1`
    - Replace: `dp[i-1][j-1] + 1`
- Base cases: `dp[0][j] = j` (insert all characters), `dp[i][0] = i` (delete all characters)
Solution:

```python
def minDistance(word1, word2):
    m, n = len(word1), len(word2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i-1] == word2[j-1]:
                dp[i][j] = dp[i-1][j-1]
            else:
                dp[i][j] = min(dp[i][j-1], dp[i-1][j], dp[i-1][j-1]) + 1
    return dp[m][n]
```

Time Complexity: O(m × n)
Space Complexity: O(m × n)
LeetCode (Classic): 0/1 Knapsack Problem
Problem: Given the weights and values of `n` items, put a subset of them into a knapsack of capacity `W` so that the total value is maximized. Each item can be taken at most once.
Analysis:

- State: `dp[i][w]` = maximum value achievable with the first `i` items and weight capacity `w`
- Transition:
  - If item `i` doesn't fit: `dp[i][w] = dp[i-1][w]`
  - Otherwise: `dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i])`
- Base cases: `dp[0][w] = 0` (no items, no value)
Solution:

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i-1][w]
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i][w], dp[i-1][w - weights[i-1]] + values[i-1])
    return dp[n][capacity]
```

Time Complexity: O(n × W)
Space Complexity: O(n × W), reducible to O(W) with a 1D array iterated backwards:

```python
def knapsack(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]
```
Space Optimization Techniques
Space optimization is crucial for DP problems, especially in competitive programming where memory constraints matter. Here are common techniques:
1. Rolling Array (1D to Variables)
When only the last few values are needed, replace arrays with variables:
```python
# Before: O(n) space — full dp array
dp = [0] * (n + 1)
dp[0], dp[1] = 1, 1
for i in range(2, n + 1):
    dp[i] = dp[i-1] + dp[i-2]

# After: O(1) space — keep only the last two values
prev, curr = 1, 1
for _ in range(2, n + 1):
    prev, curr = curr, prev + curr
```
2. 2D to 1D (Row-by-Row Processing)
When only the previous row is needed, use a 1D array:
```python
# Before: O(m*n) space — full 2D table
dp = [[0] * (n + 1) for _ in range(m + 1)]

# After: O(n) space — only the previous row is kept
prev = [0] * (n + 1)
for i in range(1, m + 1):
    curr = [0] * (n + 1)
    # ... fill curr using prev (the previous row) ...
    prev = curr
```
3. Backward Iteration for 1D Optimization
For knapsack-like problems, iterate backwards to avoid overwriting values:
```python
# Forward iteration (wrong for 0/1 knapsack):
# dp[w - weight] may already include the current item,
# so the same item can be counted twice
for w in range(weight, capacity + 1):
    dp[w] = max(dp[w], dp[w - weight] + value)

# Backward iteration (correct): dp[w - weight] still holds
# the value from the previous item's row
for w in range(capacity, weight - 1, -1):
    dp[w] = max(dp[w], dp[w - weight] + value)
```
DP vs Recursion vs Memoization
Understanding the relationship between these concepts is crucial:
Pure Recursion
Recursive solution without optimization:
```python
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
```

Time Complexity: O(2^n) (each call branches into two more)
Space Complexity: O(n) (recursion stack depth)
Memoization (Top-Down DP)
Store results of recursive calls:
```python
def fib(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
```

Time Complexity: O(n)
Space Complexity: O(n) (memo table plus recursion stack)
Tabulation (Bottom-Up DP)
Build solution iteratively from base cases:
```python
def fib(n):
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Time Complexity: O(n)
Space Complexity: O(1)
Comparison
| Approach | Time | Space | Pros | Cons |
|---|---|---|---|---|
| Recursion | O(2^n) | O(n) | Intuitive, natural | Exponential time |
| Memoization | O(n) | O(n) | Natural recursion, avoids recomputation | Stack overflow risk |
| Tabulation | O(n) | O(n) or O(1) | No stack overflow, space optimizable | Less intuitive |
When to use each:
- Recursion: Only for small inputs or when clarity matters more than performance
- Memoization: When the recursive structure is natural and you want to preserve it
- Tabulation: Preferred for competitive programming, better space optimization, no stack overflow
Complexity Analysis
Time Complexity
For DP problems, time complexity is typically:

- 1D DP: O(n), or O(n × k) where k is the number of choices per state
- 2D DP: O(m × n) or O(n²)
- General: O(number of states × work per state)
Space Complexity
- Original: O(number of states)
- Optimized: often reducible to O(1) for 1D or O(n) for 2D
Example Analysis: Coin Change
- States: O(amount) states (one for each amount from 0 to amount)
- Work per state: O(number of coins) (try each coin)
- Time: O(amount × number of coins)
- Space: O(amount)
Common DP Patterns
Pattern 1: Linear DP
State depends on a single dimension (position, time, etc.):
- Climbing Stairs
- House Robber
- Maximum Subarray
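To illustrate the linear pattern, here is a minimal sketch for Maximum Subarray (LeetCode 53), where the state "best sum of a subarray ending at index i" is collapsed to a single rolling variable:

```python
def maxSubArray(nums):
    # best_ending_here plays the role of dp[i]: the best sum of a
    # subarray that ends exactly at the current index
    best_ending_here = best_overall = nums[0]
    for num in nums[1:]:
        # either extend the previous subarray or start fresh at num
        best_ending_here = max(num, best_ending_here + num)
        best_overall = max(best_overall, best_ending_here)
    return best_overall
```

This is the DP view of Kadane's algorithm: the transition `dp[i] = max(nums[i], dp[i-1] + nums[i])` depends only on the previous state, so O(1) space suffices.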
Pattern 2: Interval DP
Process intervals of increasing length:
- Palindrome Partitioning
- Burst Balloons
- Matrix Chain Multiplication
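As a small taste of the interval pattern, here is a sketch for the closely related Longest Palindromic Subsequence (LeetCode 516): intervals `[i, j]` are filled so that shorter intervals are always computed before the longer intervals that depend on them:

```python
def longestPalindromeSubseq(s):
    n = len(s)
    # dp[i][j] = length of the longest palindromic subsequence in s[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):   # iterate i backwards so dp[i+1][...] is ready
        dp[i][i] = 1                 # a single character is a palindrome
        for j in range(i + 1, n):
            if s[i] == s[j]:
                dp[i][j] = dp[i+1][j-1] + 2
            else:
                dp[i][j] = max(dp[i+1][j], dp[i][j-1])
    return dp[0][n-1]
```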
Pattern 3: Tree DP
DP on trees, often with DFS:
- House Robber III
- Binary Tree Maximum Path Sum
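A sketch of the tree-DP idea behind House Robber III (LeetCode 337): a DFS returns a pair per node, (best if this node is robbed, best if it is skipped). The helper name `rob_tree` and the minimal `TreeNode` class are illustrative:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def rob_tree(root):
    def dfs(node):
        if not node:
            return 0, 0  # (robbed, skipped)
        rob_l, skip_l = dfs(node.left)
        rob_r, skip_r = dfs(node.right)
        # robbing this node forces both children to be skipped
        robbed = node.val + skip_l + skip_r
        # skipping it lets each child take its better option
        skipped = max(rob_l, skip_l) + max(rob_r, skip_r)
        return robbed, skipped
    return max(dfs(root))
```

Each node is visited once, so the whole computation is O(n) in the number of nodes.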
Pattern 4: State Machine DP
Track multiple states simultaneously:
- Best Time to Buy and Sell Stock (with transactions)
- House Robber II (circular array)
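A minimal state-machine sketch, using Best Time to Buy and Sell Stock II (LeetCode 122, unlimited transactions) as the example: each day the machine is in one of two states, holding a share or holding cash, and transitions between them. The function name `maxProfitUnlimited` is illustrative:

```python
def maxProfitUnlimited(prices):
    # two states: hold = best profit while holding a share,
    #             cash = best profit while holding no share
    hold, cash = float('-inf'), 0
    for price in prices:
        hold = max(hold, cash - price)   # keep holding, or buy today
        cash = max(cash, hold + price)   # keep cash, or sell today
    return cash
```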
Pattern 5: Digit DP
Count numbers with certain properties:
- Count Numbers with Unique Digits
- Numbers At Most N Given Digit Set
Q&A: Common Questions
Q1: How do I know if a problem is a DP problem?
A: Look for these signs:
- Optimization problem (max, min, count)
- Can be broken into subproblems
- Subproblems overlap
- Has optimal substructure
- Often involves sequences, arrays, or strings
Q2: Should I use top-down (memoization) or bottom-up (tabulation)?
A:
- Top-down: More intuitive if you think recursively, easier to implement from recursive solution
- Bottom-up: Better for space optimization, no stack overflow risk, preferred in interviews
- Recommendation: Learn both, but master bottom-up for competitive programming
Q3: How do I find the state transition equation?
A:
1. Identify what choices you have at each step
2. Consider all possible transitions from previous states
3. Write the recurrence relation
4. Verify with small examples
Q4: What if I can't figure out the DP state?
A:
- Start with the most obvious state (position, index)
- Consider what information you need to make decisions
- Look at constraints — they often hint at state dimensions
- Practice with simpler problems first
Q5: How do I optimize space in 2D DP?
A:
- If only previous row is needed: use 1D array, update row by row
- If only diagonal is needed: use variables
- If backward dependencies: iterate backwards
- Always check if you can reduce dimensions
Q6: Can DP solve all optimization problems?
A: No. DP requires:
- Optimal substructure
- Overlapping subproblems
- Finite state space
Problems without these properties need other techniques (greedy, divide-and-conquer, etc.).
Q7: How do I debug DP solutions?
A:
- Print the DP table after filling
- Verify base cases
- Check state transitions with small examples
- Use test cases with known answers
- Trace through the algorithm manually
Q8: What's the difference between DP and greedy algorithms?
A:
- DP: Considers all possibilities, makes optimal choice at each step based on previous optimal solutions
- Greedy: Makes locally optimal choice without considering future consequences
- Key difference: Greedy doesn't reconsider past decisions; DP does
Q9: How do I handle DP with constraints (like "at most k transactions")?
A: Add a dimension to your state:

- `dp[i][k]` = state at position `i` with `k` tracking the constraint (e.g. transactions remaining)
- Update transitions to account for the constraint
- Example: stock problems with transaction limits
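As a hedged sketch of this idea, here is one way to add a transaction dimension for Best Time to Buy and Sell Stock IV (LeetCode 188, at most k transactions); the function name `maxProfitK` is illustrative:

```python
def maxProfitK(k, prices):
    if not prices or k == 0:
        return 0
    # hold[t] = best profit while holding a share during transaction t
    # cash[t] = best profit after completing t transactions
    hold = [float('-inf')] * (k + 1)
    cash = [0] * (k + 1)
    for price in prices:
        for t in range(1, k + 1):
            hold[t] = max(hold[t], cash[t-1] - price)  # open transaction t
            cash[t] = max(cash[t], hold[t] + price)    # close transaction t
    return cash[k]
```

This runs in O(n × k) time; for very large k (k ≥ n/2), the limit is effectively unlimited and k can be capped for efficiency.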
Q10: Is DP always better than recursion?
A: Not always:
- DP is better: When subproblems overlap significantly
- Recursion is fine: When subproblems don't overlap (divide-and-conquer)
- Rule of thumb: If you see repeated calculations, use DP
Practice Strategy
Step 1: Master the Fundamentals
Start with classic problems:
1. Fibonacci
2. Climbing Stairs
3. House Robber
4. Coin Change
Step 2: Learn 1D Patterns
Practice these patterns:
- Linear sequences
- State transitions
- Space optimization
Step 3: Master 2D DP
Focus on:
- String DP (LCS, Edit Distance)
- Grid DP (Unique Paths)
- Knapsack variants
Step 4: Advanced Topics
Once comfortable:
- Interval DP
- Tree DP
- State machine DP
- Digit DP
Step 5: Problem-Solving Framework
For any DP problem:
1. Identify: Is this a DP problem?
2. State: What does `dp[i]` or `dp[i][j]` represent?
3. Transition: How do states relate?
4. Base cases: What are the smallest subproblems?
5. Implementation: Code it up
6. Optimize: Can you reduce space?
Summary
Dynamic Programming is a powerful technique that transforms exponential-time recursive solutions into polynomial-time iterative ones. The key to mastering DP is:
- Understand the principles: Optimal substructure and overlapping subproblems
- Follow the process: State definition → Transition → Base cases
- Practice patterns: 1D linear, 2D string/grid, knapsack, etc.
- Optimize space: Use rolling arrays, reduce dimensions when possible
- Build intuition: Start with simple problems and gradually increase difficulty
Remember, DP is not about memorizing solutions — it's about recognizing patterns and applying systematic problem-solving techniques. With practice, you'll develop the intuition to identify DP problems and construct efficient solutions.
The problems covered in this article form the foundation of dynamic programming. Master these, and you'll be well-equipped to tackle more advanced DP problems on LeetCode and in technical interviews. Keep practicing, and don't be discouraged by initial difficulties — DP becomes more intuitive with experience.
Additional Resources
- LeetCode DP Collection: Practice problems sorted by difficulty
- Classic DP Problems: Fibonacci, Knapsack, LCS, Edit Distance
- Pattern Recognition: Learn to identify DP patterns quickly
- Space Optimization: Master reducing space complexity
Happy coding, and remember: every expert was once a beginner. Keep solving problems, and DP will become second nature!
- Post title: LeetCode (7): Dynamic Programming Basics
- Post author: Chen Kai
- Create time: 2023-07-06 00:00:00
- Post link: https://www.chenk.top/en/leetcode-dynamic-programming-basics/
- Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.