LeetCode (7): Dynamic Programming Basics
Chen Kai

Dynamic Programming (DP) is one of the most powerful problem-solving paradigms in computer science. While it might seem intimidating at first, understanding its core principles and patterns can transform how you approach optimization problems. This article guides you through the fundamentals of dynamic programming, from the theoretical foundations to practical LeetCode problem-solving strategies.

What is Dynamic Programming?

Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems. The key insight is that instead of solving the same subproblem multiple times, we solve it once and store the result for future use. This approach dramatically improves efficiency for problems that exhibit two critical properties: optimal substructure and overlapping subproblems.

The term "dynamic programming" was coined by Richard Bellman in the 1950s. Interestingly, Bellman chose this name not because the method involves dynamic or changing behavior, but rather to make it sound impressive enough to secure funding. Despite its somewhat misleading name, DP has become one of the most fundamental techniques in algorithm design.

Core Principles

Optimal Substructure

A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems. In other words, if we can solve the problem optimally by combining optimal solutions to smaller problems, then the problem has optimal substructure.

Consider the classic example of finding the shortest path in a graph. If the shortest path from node A to node C goes through node B, then the path from A to B must be the shortest path from A to B, and the path from B to C must be the shortest path from B to C. This property allows us to build solutions incrementally.

Example: In the Fibonacci sequence, F(n) = F(n-1) + F(n-2). The optimal solution for F(n) depends on the optimal solutions for F(n-1) and F(n-2).

Overlapping Subproblems

A problem has overlapping subproblems if the same subproblem is solved multiple times during the computation. This is where DP shines — instead of recalculating the same values repeatedly, we store results in a table (memoization) and reuse them.

Example: When computing F(5) recursively:

  • F(5) = F(4) + F(3)
  • F(4) = F(3) + F(2)
  • F(3) = F(2) + F(1)

Notice that F(3) and F(2) are computed multiple times. This redundancy is exactly what DP eliminates.
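The redundancy can be made concrete with a short experiment (a quick sketch, not part of any LeetCode solution): count how often each subproblem is recomputed by the naive recursion.

```python
call_count = {}  # n -> how many times fib(n) is computed

def fib(n):
    call_count[n] = call_count.get(n, 0) + 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(call_count[5])             # fib(5) alone is recomputed 8 times
print(sum(call_count.values()))  # 177 calls for only 11 distinct subproblems
```

With memoization, each of the 11 subproblems would be computed exactly once.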

When to Use Dynamic Programming

DP is most effective when:

1. The problem can be broken down into smaller subproblems
2. These subproblems overlap (the same subproblem appears multiple times)
3. The problem has optimal substructure (the optimal solution uses optimal subproblem solutions)
4. You need to find an optimal value (maximum, minimum, count, etc.) rather than all possible solutions

The Three-Step DP Process

Every DP problem can be approached systematically using these three steps:

Step 1: Define the State

The state represents a subproblem. You need to identify:

  • What information you need to track
  • What parameters uniquely identify a subproblem
  • What the state represents (usually the answer to a subproblem)

Example: For climbing stairs, the state dp[i] represents "the number of ways to reach step i".

Step 2: Find the State Transition

The state transition (recurrence relation) describes how to compute a state from previous states. This is the heart of DP — it captures the relationship between subproblems.

Example: For climbing stairs, dp[i] = dp[i-1] + dp[i-2] because you can reach step i from step i-1 or step i-2.

Step 3: Determine Base Cases and Initialization

Base cases are the smallest subproblems that can be solved directly without recursion. They prevent infinite recursion and provide starting points for building up solutions.

Example: For climbing stairs, dp[0] = 1 (one way to stay at ground level) and dp[1] = 1 (one way to reach the first step).

One-Dimensional Dynamic Programming

One-dimensional DP problems use a single array to store subproblem solutions. The state typically depends on a single parameter (usually position or index).

LeetCode 70: Climbing Stairs

LeetCode 70: You are climbing a staircase. It takes n steps to reach the top. Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?

Analysis:

  • State: dp[i] = number of ways to reach step i
  • Transition: dp[i] = dp[i-1] + dp[i-2] (can come from step i-1 or step i-2)
  • Base cases: dp[0] = 1, dp[1] = 1

Solution:

def climbStairs(n):
    if n <= 1:
        return 1

    dp = [0] * (n + 1)
    dp[0] = 1
    dp[1] = 1

    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]

    return dp[n]

Time Complexity: O(n) — single pass through the array
Space Complexity: O(n) — DP array of size n + 1

Space Optimization: Since we only need the last two values, we can reduce space to O(1):

def climbStairs(n):
    if n <= 1:
        return 1

    prev2 = 1  # dp[0]
    prev1 = 1  # dp[1]

    for i in range(2, n + 1):
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current

    return prev1

LeetCode 198: House Robber

LeetCode 198: You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed. You cannot rob two adjacent houses. Determine the maximum amount of money you can rob tonight.

Analysis:

  • State: dp[i] = maximum money that can be robbed from houses 0 to i
  • Transition: At house i, we have two choices:
    • Rob house i: dp[i] = nums[i] + dp[i-2]
    • Skip house i: dp[i] = dp[i-1]
    • Take the maximum: dp[i] = max(nums[i] + dp[i-2], dp[i-1])
  • Base cases: dp[0] = nums[0], dp[1] = max(nums[0], nums[1])

Solution:

def rob(nums):
    n = len(nums)
    if n == 0:
        return 0
    if n == 1:
        return nums[0]

    dp = [0] * n
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])

    for i in range(2, n):
        dp[i] = max(nums[i] + dp[i-2], dp[i-1])

    return dp[n-1]

Time Complexity: O(n)
Space Complexity: O(n), optimizable to O(1)

Space-Optimized Version:

def rob(nums):
    n = len(nums)
    if n == 0:
        return 0
    if n == 1:
        return nums[0]

    prev2 = nums[0]
    prev1 = max(nums[0], nums[1])

    for i in range(2, n):
        current = max(nums[i] + prev2, prev1)
        prev2 = prev1
        prev1 = current

    return prev1

LeetCode 121: Best Time to Buy and Sell Stock

LeetCode 121: You are given an array prices where prices[i] is the price of a given stock on day i. You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock. Return the maximum profit you can achieve.

Analysis:

  • State: dp[i] = maximum profit achievable up to day i
  • Transition:
    • Track the minimum price seen so far: min_price = min(min_price, prices[i])
    • Calculate profit if selling today: profit = prices[i] - min_price
    • Update maximum: dp[i] = max(dp[i-1], profit)
  • Base case: dp[0] = 0 (no profit on first day)

Solution:

def maxProfit(prices):
    if not prices:
        return 0

    min_price = prices[0]
    max_profit = 0

    for i in range(1, len(prices)):
        min_price = min(min_price, prices[i])
        max_profit = max(max_profit, prices[i] - min_price)

    return max_profit

Time Complexity: O(n)
Space Complexity: O(1)

Alternative DP Approach:

def maxProfit(prices):
    n = len(prices)
    if n == 0:
        return 0

    # dp[i] = max profit up to day i
    dp = [0] * n
    min_price = prices[0]

    for i in range(1, n):
        min_price = min(min_price, prices[i])
        dp[i] = max(dp[i-1], prices[i] - min_price)

    return dp[n-1]

LeetCode 322: Coin Change

LeetCode 322: You are given an integer array coins representing coins of different denominations and an integer amount representing a total amount of money. Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.

Analysis:

  • State: dp[i] = minimum number of coins needed to make amount i
  • Transition: For each coin denomination, try using it:
    • dp[i] = min(dp[i], dp[i - coin] + 1) for all valid coins
  • Base case: dp[0] = 0 (0 coins needed for amount 0)

Solution:

def coinChange(coins, amount):
    # Initialize with amount + 1 (impossible value)
    dp = [amount + 1] * (amount + 1)
    dp[0] = 0

    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)

    return dp[amount] if dp[amount] != amount + 1 else -1

Time Complexity: O(amount × len(coins))
Space Complexity: O(amount)

Two-Dimensional Dynamic Programming

Two-dimensional DP problems use a 2D table where the state depends on two parameters. These are often more complex but follow the same principles.

LeetCode 1143: Longest Common Subsequence

LeetCode 1143: Given two strings text1 and text2, return the length of their longest common subsequence. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.

Analysis:

  • State: dp[i][j] = length of LCS of text1[0:i] and text2[0:j]
  • Transition:
    • If text1[i-1] == text2[j-1]: dp[i][j] = dp[i-1][j-1] + 1
    • Otherwise: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
  • Base cases: dp[0][j] = 0, dp[i][0] = 0 (empty string has LCS length 0)

Solution:

def longestCommonSubsequence(text1, text2):
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])

    return dp[m][n]

Time Complexity: O(m × n)
Space Complexity: O(m × n), optimizable to O(n)

Space-Optimized Version (using only the previous row):

def longestCommonSubsequence(text1, text2):
    m, n = len(text1), len(text2)
    if m < n:
        text1, text2 = text2, text1
        m, n = n, m

    prev = [0] * (n + 1)

    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev = curr

    return prev[n]

LeetCode 72: Edit Distance

LeetCode 72: Given two strings word1 and word2, return the minimum number of operations required to convert word1 to word2. You have three operations permitted: insert a character, delete a character, or replace a character.

Analysis:

  • State: dp[i][j] = minimum operations to convert word1[0:i] to word2[0:j]
  • Transition:
    • If word1[i-1] == word2[j-1]: dp[i][j] = dp[i-1][j-1] (no operation needed)
    • Otherwise, take minimum of:
      • Insert: dp[i][j-1] + 1
      • Delete: dp[i-1][j] + 1
      • Replace: dp[i-1][j-1] + 1
  • Base cases:
    • dp[0][j] = j (insert all characters)
    • dp[i][0] = i (delete all characters)

Solution:

def minDistance(word1, word2):
    m, n = len(word1), len(word2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    # Base cases
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i-1] == word2[j-1]:
                dp[i][j] = dp[i-1][j-1]
            else:
                dp[i][j] = min(
                    dp[i-1][j] + 1,    # delete
                    dp[i][j-1] + 1,    # insert
                    dp[i-1][j-1] + 1   # replace
                )

    return dp[m][n]

Time Complexity: O(m × n)
Space Complexity: O(m × n), optimizable to O(n)

LeetCode (Classic): 0/1 Knapsack Problem

Problem: Given weights and values of n items, put these items in a knapsack of capacity W to get the maximum total value. Each item can be used at most once.

Analysis:

  • State: dp[i][w] = maximum value achievable with the first i items and weight capacity w
  • Transition:
    • If item i doesn't fit: dp[i][w] = dp[i-1][w]
    • Otherwise: dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i])
  • Base cases: dp[0][w] = 0 (no items, no value)

Solution:

def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i-1] > w:
                dp[i][w] = dp[i-1][w]
            else:
                dp[i][w] = max(
                    dp[i-1][w],
                    dp[i-1][w - weights[i-1]] + values[i-1]
                )

    return dp[n][capacity]

Time Complexity: O(n × W)
Space Complexity: O(n × W), optimizable to O(W)

Space-Optimized Version (using a 1D array, iterating backwards):

def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [0] * (capacity + 1)

    for i in range(n):
        # Iterate backwards to avoid using updated values
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])

    return dp[capacity]

Space Optimization Techniques

Space optimization is crucial for DP problems, especially in competitive programming where memory constraints matter. Here are common techniques:

1. Rolling Array (1D to Variables)

When only the last few values are needed, replace arrays with variables:

# Before: O(n) space
dp = [0] * (n + 1)
dp[i] = dp[i-1] + dp[i-2]

# After: O(1) space
prev2, prev1 = 0, 1
current = prev1 + prev2

2. 2D to 1D (Row-by-Row Processing)

When only the previous row is needed, use a 1D array:

# Before: O(m*n) space
dp = [[0] * (n + 1) for _ in range(m + 1)]

# After: O(n) space
prev = [0] * (n + 1)
curr = [0] * (n + 1)
# Process row by row

3. Backward Iteration for 1D Optimization

For knapsack-like problems, iterate backwards to avoid overwriting values:

# Forward iteration (wrong for 0/1 knapsack)
for w in range(weights[i], capacity + 1):
    dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    # dp[w - weights[i]] might have been updated in this iteration!

# Backward iteration (correct)
for w in range(capacity, weights[i] - 1, -1):
    dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    # dp[w - weights[i]] hasn't been updated yet
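The difference is easy to observe with a single hypothetical item of weight 1 and value 10 and capacity 3: backward iteration counts the item once, while forward iteration silently turns the problem into unbounded knapsack.

```python
weights, values, capacity = [1], [10], 3

# Backward iteration: each item is used at most once (0/1 knapsack)
dp01 = [0] * (capacity + 1)
for i in range(len(weights)):
    for w in range(capacity, weights[i] - 1, -1):
        dp01[w] = max(dp01[w], dp01[w - weights[i]] + values[i])
print(dp01[capacity])  # 10: the single item is counted once

# Forward iteration: the same item can be reused (unbounded knapsack)
dp_un = [0] * (capacity + 1)
for i in range(len(weights)):
    for w in range(weights[i], capacity + 1):
        dp_un[w] = max(dp_un[w], dp_un[w - weights[i]] + values[i])
print(dp_un[capacity])  # 30: the item is counted three times
```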

DP vs Recursion vs Memoization

Understanding the relationship between these concepts is crucial:

Pure Recursion

Recursive solution without optimization:

def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

Time Complexity: O(2^n) (exponential)
Space Complexity: O(n) (call stack)

Memoization (Top-Down DP)

Store results of recursive calls:

def fib(n, memo={}):
    # Note: the mutable default argument persists across top-level calls;
    # pass a fresh dict (or use functools.lru_cache) if that is undesirable.
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

Time Complexity: O(n)
Space Complexity: O(n) (memo + call stack)

Tabulation (Bottom-Up DP)

Build solution iteratively from base cases:

def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

Time Complexity: O(n)
Space Complexity: O(n), optimizable to O(1)

Comparison

Approach      Time     Space          Pros                                     Cons
Recursion     O(2^n)   O(n)           Intuitive, natural                       Exponential time
Memoization   O(n)     O(n)           Natural recursion, avoids recomputation  Stack overflow risk
Tabulation    O(n)     O(n) or O(1)   No stack overflow, space optimizable     Less intuitive

When to use each:

  • Recursion: Only for small inputs or when clarity matters more than performance
  • Memoization: When the recursive structure is natural and you want to preserve it
  • Tabulation: Preferred for competitive programming, better space optimization, no stack overflow

Complexity Analysis

Time Complexity

For DP problems, time complexity is typically:

  • 1D DP: O(n) or O(n × k), where k is the number of choices per state
  • 2D DP: O(m × n) or O(n²)
  • General: O(number of states × work per state)

Space Complexity

  • Original: O(number of states)
  • Optimized: often reducible to O(1) for 1D or O(n) for 2D

Example Analysis: Coin Change

  • States: O(amount) states (one for each amount from 0 to amount)
  • Work per state: O(len(coins)) (try each coin)
  • Time: O(amount × len(coins))
  • Space: O(amount)

Common DP Patterns

Pattern 1: Linear DP

State depends on a single dimension (position, time, etc.):

  • Climbing Stairs
  • House Robber
  • Maximum Subarray
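
Maximum Subarray (LeetCode 53) illustrates linear DP well. A minimal sketch of the standard recurrence (Kadane's algorithm): the best sum ending at index i is either nums[i] alone or nums[i] extended from the previous best.

```python
def maxSubArray(nums):
    # best: best sum of a subarray ending at the current index
    # ans: best sum seen anywhere so far
    best = ans = nums[0]
    for x in nums[1:]:
        best = max(x, best + x)
        ans = max(ans, best)
    return ans

print(maxSubArray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 (subarray [4, -1, 2, 1])
```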

Pattern 2: Interval DP

Process intervals of increasing length:

  • Palindrome Partitioning
  • Burst Balloons
  • Matrix Chain Multiplication
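
As a taste of interval DP, here is a sketch for Longest Palindromic Subsequence (LeetCode 516, a close relative of the listed problems): dp[i][j] is the answer for the interval s[i..j], and intervals are filled from shorter to longer.

```python
def longestPalindromeSubseq(s):
    n = len(s)
    dp = [[0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        dp[i][i] = 1  # a single character is a palindrome of length 1
        for j in range(i + 1, n):
            if s[i] == s[j]:
                # matching endpoints extend the inner interval's answer
                dp[i][j] = dp[i+1][j-1] + 2
            else:
                dp[i][j] = max(dp[i+1][j], dp[i][j-1])
    return dp[0][n-1]

print(longestPalindromeSubseq("bbbab"))  # 4 ("bbbb")
```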

Pattern 3: Tree DP

DP on trees, often with DFS:

  • House Robber III
  • Binary Tree Maximum Path Sum
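
For House Robber III (LeetCode 337), one common formulation has the DFS return a pair per node: the best total if this node is robbed, and the best if it is skipped. The TreeNode class below is a minimal stand-in for LeetCode's definition.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def rob_tree(root):
    def dfs(node):
        if node is None:
            return (0, 0)  # (robbed, skipped)
        left, right = dfs(node.left), dfs(node.right)
        robbed = node.val + left[1] + right[1]  # children must be skipped
        skipped = max(left) + max(right)        # children free to choose
        return (robbed, skipped)
    return max(dfs(root))

# Tree [3, 2, 3, None, 3, None, 1]: rob 3 + 3 + 1 = 7
root = TreeNode(3, TreeNode(2, None, TreeNode(3)), TreeNode(3, None, TreeNode(1)))
print(rob_tree(root))  # 7
```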

Pattern 4: State Machine DP

Track multiple states simultaneously:

  • Best Time to Buy and Sell Stock (with transactions)
  • House Robber II (circular array)
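
A minimal state-machine sketch for LeetCode 121 tracks just two states, "holding the stock" and "holding cash"; this formulation generalizes naturally once transaction limits or cooldowns are added.

```python
def maxProfit(prices):
    hold = float('-inf')  # best balance while holding the stock
    cash = 0              # best balance with no stock
    for p in prices:
        hold = max(hold, -p)         # buy (once, starting from 0 cash)
        cash = max(cash, hold + p)   # sell
    return cash

print(maxProfit([7, 1, 5, 3, 6, 4]))  # 5 (buy at 1, sell at 6)
```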

Pattern 5: Digit DP

Count numbers with certain properties:

  • Count Numbers with Unique Digits
  • Numbers At Most N Given Digit Set
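
Count Numbers with Unique Digits (LeetCode 357) can be handled with a simple counting recurrence; note that the full digit-DP template (tight upper bounds, memoized digit positions) is more general than this sketch.

```python
def countNumbersWithUniqueDigits(n):
    if n == 0:
        return 1  # only the number 0
    total, unique = 10, 9  # 10 one-digit numbers; 9 choices for a leading digit
    for i in range(2, n + 1):
        unique *= 11 - i   # 9*9, 9*9*8, ... unique i-digit numbers
        total += unique
    return total

print(countNumbersWithUniqueDigits(2))  # 91
```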

Q&A: Common Questions

Q1: How do I know if a problem is a DP problem?

A: Look for these signs:

  • Optimization problem (max, min, count)
  • Can be broken into subproblems
  • Subproblems overlap
  • Has optimal substructure
  • Often involves sequences, arrays, or strings

Q2: Should I use top-down (memoization) or bottom-up (tabulation)?

A:

  • Top-down: More intuitive if you think recursively, easier to implement from recursive solution
  • Bottom-up: Better for space optimization, no stack overflow risk, preferred in interviews
  • Recommendation: Learn both, but master bottom-up for competitive programming

Q3: How do I find the state transition equation?

A:

1. Identify what choices you have at each step
2. Consider all possible transitions from previous states
3. Write the recurrence relation
4. Verify with small examples

Q4: What if I can't figure out the DP state?

A:

  • Start with the most obvious state (position, index)
  • Consider what information you need to make decisions
  • Look at constraints — they often hint at state dimensions
  • Practice with simpler problems first

Q5: How do I optimize space in 2D DP?

A:

  • If only previous row is needed: use 1D array, update row by row
  • If only diagonal is needed: use variables
  • If backward dependencies: iterate backwards
  • Always check if you can reduce dimensions

Q6: Can DP solve all optimization problems?

A: No. DP requires:

  • Optimal substructure
  • Overlapping subproblems
  • Finite state space

Problems without these properties need other techniques (greedy, divide-and-conquer, etc.).

Q7: How do I debug DP solutions?

A:

  • Print the DP table after filling
  • Verify base cases
  • Check state transitions with small examples
  • Use test cases with known answers
  • Trace through the algorithm manually

Q8: What's the difference between DP and greedy algorithms?

A:

  • DP: Considers all possibilities, makes optimal choice at each step based on previous optimal solutions
  • Greedy: Makes locally optimal choice without considering future consequences
  • Key difference: Greedy doesn't reconsider past decisions; DP does

Q9: How do I handle DP with constraints (like "at most k transactions")?

A: Add a dimension to your state:

  • dp[i][k] = state at position i with constraint k
  • Update transitions to account for the constraint
  • Example: Stock problems with transaction limits
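
A sketch of this idea for stock problems with at most k transactions (the function name maxProfitK and its signature are mine, for illustration): buy[j] and sell[j] track the best balance after the j-th buy and j-th sell.

```python
def maxProfitK(k, prices):
    buy = [float('-inf')] * (k + 1)  # best balance right after the j-th buy
    sell = [0] * (k + 1)             # best balance right after the j-th sell
    for p in prices:
        for j in range(1, k + 1):
            buy[j] = max(buy[j], sell[j - 1] - p)   # j-th buy follows (j-1)-th sell
            sell[j] = max(sell[j], buy[j] + p)      # j-th sell follows j-th buy
    return sell[k]

print(maxProfitK(2, [3, 2, 6, 5, 0, 3]))  # 7 (buy 2/sell 6, buy 0/sell 3)
```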

Q10: Is DP always better than recursion?

A: Not always:

  • DP is better: When subproblems overlap significantly
  • Recursion is fine: When subproblems don't overlap (divide-and-conquer)
  • Rule of thumb: If you see repeated calculations, use DP

Practice Strategy

Step 1: Master the Fundamentals

Start with classic problems:

1. Fibonacci
2. Climbing Stairs
3. House Robber
4. Coin Change

Step 2: Learn 1D Patterns

Practice these patterns:

  • Linear sequences
  • State transitions
  • Space optimization

Step 3: Master 2D DP

Focus on:

  • String DP (LCS, Edit Distance)
  • Grid DP (Unique Paths)
  • Knapsack variants
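
For grid DP, Unique Paths (LeetCode 62) is the canonical warm-up; here is a sketch using the space-optimized 1D form, where dp[j] accumulates the number of paths to column j row by row.

```python
def uniquePaths(m, n):
    # dp[j]: number of paths reaching column j of the current row
    dp = [1] * n  # first row: exactly one path to each cell
    for _ in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]  # from above (old dp[j]) + from the left
    return dp[-1]

print(uniquePaths(3, 7))  # 28
```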

Step 4: Advanced Topics

Once comfortable:

  • Interval DP
  • Tree DP
  • State machine DP
  • Digit DP

Step 5: Problem-Solving Framework

For any DP problem:

1. Identify: Is this a DP problem?
2. State: What does dp[i] or dp[i][j] represent?
3. Transition: How do states relate?
4. Base cases: What are the smallest subproblems?
5. Implementation: Code it up
6. Optimize: Can you reduce space?

Summary

Dynamic Programming is a powerful technique that transforms exponential-time recursive solutions into polynomial-time iterative ones. The key to mastering DP is:

  1. Understand the principles: Optimal substructure and overlapping subproblems
  2. Follow the process: State definition → Transition → Base cases
  3. Practice patterns: 1D linear, 2D string/grid, knapsack, etc.
  4. Optimize space: Use rolling arrays, reduce dimensions when possible
  5. Build intuition: Start with simple problems and gradually increase difficulty

Remember, DP is not about memorizing solutions — it's about recognizing patterns and applying systematic problem-solving techniques. With practice, you'll develop the intuition to identify DP problems and construct efficient solutions.

The problems covered in this article form the foundation of dynamic programming. Master these, and you'll be well-equipped to tackle more advanced DP problems on LeetCode and in technical interviews. Keep practicing, and don't be discouraged by initial difficulties — DP becomes more intuitive with experience.

Additional Resources

  • LeetCode DP Collection: Practice problems sorted by difficulty
  • Classic DP Problems: Fibonacci, Knapsack, LCS, Edit Distance
  • Pattern Recognition: Learn to identify DP patterns quickly
  • Space Optimization: Master reducing space complexity

Happy coding, and remember: every expert was once a beginner. Keep solving problems, and DP will become second nature!

  • Post title: LeetCode (7): Dynamic Programming Basics
  • Post author: Chen Kai
  • Create time: 2023-07-06 00:00:00
  • Post link: https://www.chenk.top/en/leetcode-dynamic-programming-basics/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.