LeetCode (9): Greedy Algorithms
Chen Kai

Greedy algorithms are a problem-solving paradigm that makes the locally optimal choice at each step with the hope of finding a global optimum. While seemingly simple, the challenge lies in proving that "local optimum leads to global optimum." This article systematically covers greedy algorithm fundamentals, application scenarios, and classic problems like interval scheduling, jump game, and gas station to help you build a complete greedy algorithm framework.

Series Navigation

📚 LeetCode Algorithm Series (10 articles total):

  1. Hash Tables (Two Sum, Longest Consecutive Sequence, Group Anagrams)
  2. Two Pointers (Opposite Pointers, Fast-Slow Pointers, Sliding Window)
  3. Linked List Operations (Reversal, Cycle Detection, Merging)
  4. Binary Tree Traversal & Recursion (Inorder/Preorder/Postorder, LCA)
  5. Binary Tree Traversal & Construction (Pre/In/Post-order, Level-order, Construction)
  6. Backtracking (Permutations, Combinations, Pruning)
  7. Dynamic Programming Basics (1D/2D DP, State Transition)
  8. → Greedy Algorithms (Interval Scheduling, Jump Game) ← Current Article
  9. Graph Algorithms (BFS/DFS, Topological Sort, Union-Find)
  10. Bit Manipulation & Math (Bit Operations, Math Problems)

What is a Greedy Algorithm?

Core Concept

A Greedy Algorithm is a problem-solving approach that makes the locally optimal choice at each step. It doesn't consider the global picture, focusing only on local optimality, hoping that local optimal choices lead to a global optimum.

Characteristics of Greedy Algorithms:

  1. Local Optimality: Make the choice that looks best at each step
  2. Irrevocability: Once a choice is made, it's never changed (unlike backtracking)
  3. Simple & Efficient: Usually low time complexity (O(n) or O(n log n))
  4. Requires Proof: Need to prove that the local optimum leads to the global optimum

Greedy vs Dynamic Programming vs Backtracking:

| Feature | Greedy | Dynamic Programming | Backtracking |
|---|---|---|---|
| Decision Method | Local optimum | Global optimum | Try all possibilities |
| Time Complexity | O(n) or O(n log n) | O(n²) or higher | O(2ⁿ) or O(n!) |
| Use Cases | Greedy choice property | Optimal substructure + overlapping subproblems | Search all solutions |
| Revocability | Irrevocable | Irrevocable | Revocable |

Two Key Properties

1. Greedy Choice Property

The global optimal solution can be reached through a series of local optimal (greedy) choices. In other words, we can make what appears to be the best choice, then solve the remaining subproblem.

Example: the Activity Selection Problem

  • Problem: Given activities with start/end times, select the maximum number of non-overlapping activities
  • Greedy strategy: Always select the activity with the earliest end time
  • Proof idea: Selecting the earliest-ending activity leaves the most time for subsequent activities
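To make the greedy choice property concrete, here is a minimal sketch of the activity selection strategy just described (the function name `select_activities` and the `(start, end)` tuple layout are illustrative, not part of any library):

```python
def select_activities(activities):
    """Select a maximum set of non-overlapping activities.

    activities: list of (start, end) tuples.
    Greedy: sort by end time, then take every activity whose
    start is at or after the end of the last selected one.
    """
    selected = []
    last_end = float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (12, 16)]))
# → [(1, 4), (5, 7), (8, 11), (12, 16)]
```

Note that the sort by end time does all the work: after it, a single linear scan suffices.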

2. Optimal Substructure

The optimal solution to the problem contains optimal solutions to its subproblems. After making a greedy choice, the original problem simplifies to a smaller similar subproblem.

Design Steps

  1. Understand the problem: Clarify goal (maximize/minimize what) and constraints
  2. Determine greedy strategy: Find the "local optimum" selection criterion for each step
  3. Prove correctness: Prove local optimum leads to global optimum (proof by contradiction, exchange argument)
  4. Implement algorithm: Usually requires preprocessing like sorting
  5. Analyze complexity: Usually O(n) or O(n log n) (dominated by sorting)

Common Greedy Strategies

  1. Sort by end time: Interval scheduling problems
  2. Sort by start time: Meeting room allocation
  3. Sort by value density: Fractional knapsack problem
  4. Sort by length: String concatenation problems
  5. Sort by difference: Task assignment problems
  6. Local adjustment: Jump game, gas station
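Strategy 3 (sort by value density) is worth a concrete sketch, since it is the textbook example of a problem where greedy is provably optimal while the 0/1 variant is not. The function name and `(value, weight)` tuple layout below are illustrative assumptions:

```python
def fractional_knapsack(items, capacity):
    """Fractional knapsack: items may be taken partially.

    items: list of (value, weight) pairs.
    Greedy: sort by value density (value / weight) in descending
    order, take the densest items whole, and split the last one
    that no longer fits.
    """
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity >= weight:
            total += value          # take the whole item
            capacity -= weight
        else:
            total += value * capacity / weight  # take a fraction
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 240.0
```

The same greedy fails for the 0/1 knapsack (items can't be split), which is why that variant needs dynamic programming instead.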

LeetCode 435: Non-overlapping Intervals

LeetCode 435: Given an array of intervals intervals where intervals[i] = [starti, endi], return the minimum number of intervals you need to remove to make the rest non-overlapping.

Example:

  • Input: intervals = [[1,2],[2,3],[3,4],[1,3]]
  • Output: 1 (Remove [1,3], remaining [[1,2],[2,3],[3,4]] are non-overlapping)

Constraints:

  • 1 <= intervals.length <= 10^5
  • intervals[i].length == 2
  • -5 * 10^4 <= start_i < end_i <= 5 * 10^4

Problem Analysis

The essence of non-overlapping intervals is the interval scheduling problem: select the maximum number of non-overlapping intervals from a set. Minimum intervals to remove = Total intervals - Maximum non-overlapping intervals.

Core Challenges:

  1. How to determine interval overlap: Intervals [a, b] and [c, d] overlap if and only if a < d and c < b

  2. How to choose which intervals to keep: Need to find a greedy strategy

  3. How to prove greedy strategy correctness: Prove local optimum leads to global optimum

Key Insight:

Greedy Strategy: Sort intervals by end time, always select the interval with earliest end time that doesn't overlap with already selected intervals.

Why sort by end time:

  • Selecting the earliest-ending interval leaves the most space for subsequent intervals
  • An earlier end time allows more subsequent intervals to fit
  • This is the classic "Activity Selection Problem"

Solution Approach

Algorithm Flow:

  1. Sort: Sort intervals by end time in ascending order
  2. Initialize: Select first interval (earliest ending), record current interval's end time
  3. Traverse: Starting from second interval
    • If current interval's start time ≥ last selected interval's end time: No overlap, select current
    • Otherwise: Overlap, skip current interval (needs removal)
  4. Return: Total intervals - Selected intervals = Intervals to remove

Proof of Greedy Strategy Correctness:

Using Exchange Argument:

Assume there exists an optimal solution S whose first chosen interval is not the earliest-ending interval e, but some other interval x (so end(e) ≤ end(x)).

  • Replace x in S with e, getting a new solution S'
  • Since end(e) ≤ end(x), e is compatible with at least as many remaining intervals as x, so S' is still a valid solution of the same size
  • S' is still optimal, and its first interval is the earliest-ending one
  • Therefore, always selecting the earliest-ending interval doesn't affect optimality

Complexity Analysis

Time Complexity: O(n log n)
  • Sorting: O(n log n)
  • Traversal: O(n)
  • Total: O(n log n)

Space Complexity: depends on the sorting algorithm
  • Python's sort() uses Timsort, with O(n) worst-case space complexity
  • In-place sorting (like quicksort) uses O(log n) space (recursion stack)

Complexity Proof:

Sorting is the bottleneck, determining the total time complexity of O(n log n).

Code Implementation:

from typing import List

class Solution:
    def eraseOverlapIntervals(self, intervals: List[List[int]]) -> int:
        """
        Calculate minimum intervals to remove for a non-overlapping result

        Algorithm Steps:
        1. Sort intervals by end time in ascending order
        2. Initialize: Select first interval, record end time
        3. Traverse remaining intervals:
           a. If current interval start time >= last end time: No overlap, select
           b. Otherwise: Overlap, skip (count+1)
        4. Return skipped interval count

        Boundary Conditions:
        - Empty array: return 0
        - Single interval: return 0 (no removal needed)

        Optimization Techniques:
        - Sort by end time, greedily select the earliest-ending interval
        - Only record the last selected interval's end time, no need to store all

        Variable Meanings:
        - intervals: Input interval list
        - count: Number of kept intervals
        - last_end: Last selected interval's end time
        """
        if not intervals:
            return 0

        # Key: Sort by end time in ascending order
        intervals.sort(key=lambda x: x[1])

        count = 1                   # Keep at least the first interval
        last_end = intervals[0][1]  # First interval's end time

        # Traverse from the second interval
        for i in range(1, len(intervals)):
            # If the current interval doesn't overlap with the last selected one
            if intervals[i][0] >= last_end:
                count += 1                  # Select the current interval
                last_end = intervals[i][1]  # Update the end time
            # Otherwise they overlap; skip the current interval (don't select)

        # Intervals to remove = Total - Kept
        return len(intervals) - count
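As a quick sanity check, the same logic condensed into a standalone function (the name `erase_overlap_intervals` is mine, not LeetCode's API), run against the example and two edge cases:

```python
from typing import List

def erase_overlap_intervals(intervals: List[List[int]]) -> int:
    """Condensed version of the solution above for quick testing."""
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[1])  # sort by end time
    count, last_end = 1, intervals[0][1]
    for start, end in intervals[1:]:
        if start >= last_end:  # no overlap: keep this interval
            count += 1
            last_end = end
    return len(intervals) - count

print(erase_overlap_intervals([[1, 2], [2, 3], [3, 4], [1, 3]]))  # → 1
print(erase_overlap_intervals([[1, 2], [1, 2], [1, 2]]))          # → 2
print(erase_overlap_intervals([[1, 2], [2, 3]]))                  # → 0
```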

Algorithm Principle:

Why sorting by end time is optimal:

Consider two overlapping intervals A and B, where A ends before B:

A: [----]
B: [----------]
↑ Overlap

If selecting A:

  • A ends earlier, leaving more space for subsequent intervals
  • Example: with [0,2], [1,5], [3,6], selecting [0,2] still allows [3,6]

If selecting B:

  • B ends later, occupying more time
  • Example: with [0,2], [1,5], [3,6], selecting [1,5] prevents [3,6]

Therefore, selecting the earlier-ending interval is always at least as good.

Execution Example (intervals = [[1,2],[2,3],[3,4],[1,3]]):

After sorting: [[1,2], [2,3], [1,3], [3,4]]  # Sort by end time

Initial: count=1, last_end=2  # Select [1,2]
i=1: [2,3], start=2 >= last_end=2 ✅ No overlap
     count=2, last_end=3  # Select [2,3]
i=2: [1,3], start=1 < last_end=3 ❌ Overlap, skip
i=3: [3,4], start=3 >= last_end=3 ✅ No overlap
     count=3, last_end=4  # Select [3,4]

Keep 3 intervals, remove 4-3=1 interval

Common Pitfalls:

Pitfall 1: Sort by start time

# ❌ Wrong: Sort by start time
intervals.sort(key=lambda x: x[0])
# Counterexample: [[1,10],[2,3],[4,5]]
# After sorting by start time, select [1,10], can't select [2,3] and [4,5]
# But optimal is to remove [1,10], keep [2,3] and [4,5]

# ✅ Correct: Sort by end time
intervals.sort(key=lambda x: x[1])

Pitfall 2: Wrong overlap check

# ❌ Wrong: Use > to check non-overlap
if intervals[i][0] > last_end: # Misses [1,2] and [2,3] case

# ✅ Correct: Use >= to check non-overlap
if intervals[i][0] >= last_end: # [1,2] and [2,3] don't overlap

LeetCode 55: Jump Game

LeetCode 55: You are given an integer array nums. You are initially positioned at the array's first index, and each element in the array represents your maximum jump length at that position. Return true if you can reach the last index, or false otherwise.

Example:

  • Input: nums = [2,3,1,1,4]
  • Output: true (Jump 1 step to index 1, then 3 steps to last)
  • Input: nums = [3,2,1,0,4]
  • Output: false (Can't pass index 3's 0)

Constraints:

  • 1 <= nums.length <= 10^4
  • 0 <= nums[i] <= 10^5

Problem Analysis

The core of jump game is determining if we can reach the end. Key: maintaining the "farthest reachable position."

Core Challenges:

  1. Zero trap: If jump distance is 0 at some position, might not be able to proceed
  2. Maintaining farthest distance: How to efficiently update and check farthest reachable position
  3. Traversal strategy: Do we need to traverse all positions

Key Insight:

Greedy Strategy: Maintain a variable max_reach representing the farthest currently reachable position. Traverse the array, continuously updating max_reach. If the current position i exceeds max_reach, position i is unreachable.

Why greedy works:

  • We don't need to know exactly how to jump, only whether we "can reach"
  • As long as the current position is reachable, update the farthest position jumpable from it
  • If the farthest position ≥ the last index, the end is reachable

Solution Approach

Algorithm Flow:

  1. Initialize: max_reach = 0 (starting from position 0, can reach at most 0)
  2. Traverse: For each position i (from 0 to n-1)
    • If i > max_reach: Cannot reach position i, return false
    • Update: max_reach = max(max_reach, i + nums[i]) (farthest position jumpable from i)
    • If max_reach >= n-1: Can already reach the end, return true
  3. Return: true (traversal complete, all positions reachable)

Why is it greedy:

We don't care about the specific jump path, only the "farthest reachable position." At each step, we expand the reachable range as much as possible — a locally optimal strategy that also ensures global optimality (we either reach the end or correctly determine it's unreachable).

Complexity Analysis

Time Complexity: O(n)
  • Only need to traverse the array once
  • Each position is visited once, and each operation is O(1)

Space Complexity: O(1)
  • Only a constant number of variables (max_reach)

Complexity Proof:

We traverse the array once, and updating max_reach takes O(1) time per iteration, so the total time complexity is O(n).

Code Implementation:

from typing import List

class Solution:
    def canJump(self, nums: List[int]) -> bool:
        """
        Determine whether we can jump to the last position

        Algorithm Steps:
        1. Initialize max_reach=0 (farthest currently reachable position)
        2. Traverse each position i in the array:
           a. If i > max_reach: Cannot reach position i, return false
           b. Update max_reach = max(max_reach, i + nums[i])
           c. If max_reach >= n-1: Can already reach the end, return true
        3. Traversal complete, return true

        Boundary Conditions:
        - Single element: return true (already at the end)
        - First element is 0: max_reach=0, can't proceed, return false (unless array length is 1)

        Optimization Techniques:
        - Greedy: Only maintain the farthest reachable position, no need to record the path
        - Early return: Once max_reach >= n-1, return immediately

        Variable Meanings:
        - max_reach: Farthest currently reachable position index
        - i: Current position index
        - nums[i]: Maximum jump distance from position i
        """
        max_reach = 0  # Farthest currently reachable position
        n = len(nums)

        for i in range(n):
            # If the current position exceeds the reachable range, it's unreachable
            if i > max_reach:
                return False

            # Update the farthest reachable position
            # i + nums[i] is the farthest position jumpable from i
            max_reach = max(max_reach, i + nums[i])

            # Early return: if we can already reach the end
            if max_reach >= n - 1:
                return True

        return True  # Traversal complete, all positions reachable
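A quick sanity check — the same logic condensed into a standalone function (the name `can_jump` is mine, not LeetCode's API), run against both examples:

```python
from typing import List

def can_jump(nums: List[int]) -> bool:
    """Condensed version of canJump above for quick testing."""
    max_reach = 0
    for i, step in enumerate(nums):
        if i > max_reach:        # position i itself is unreachable
            return False
        max_reach = max(max_reach, i + step)
    return True

print(can_jump([2, 3, 1, 1, 4]))  # → True
print(can_jump([3, 2, 1, 0, 4]))  # → False
print(can_jump([0]))              # → True
```

This version omits the early return for brevity; it changes only the constant factor, not the O(n) bound.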

Algorithm Principle:

Why updating max_reach = max(max_reach, i + nums[i]) is correct:

  • i is current position
  • nums[i] is maximum jumpable distance from current position
  • i + nums[i] is farthest position reachable from current position
  • max(max_reach, i + nums[i]) ensures max_reach is always "farthest position reachable so far"

Execution Example (nums = [2,3,1,1,4]):

Initial: max_reach = 0

i=0: nums[0]=2, max_reach = max(0, 0+2) = 2 # From position 0 can jump to position 2
i=1: nums[1]=3, max_reach = max(2, 1+3) = 4 # From position 1 can jump to position 4 ✅ Reached end
Return true

Execution Example (nums = [3,2,1,0,4]):

Initial: max_reach = 0

i=0: nums[0]=3, max_reach = max(0, 0+3) = 3 # From position 0 can jump to position 3
i=1: nums[1]=2, max_reach = max(3, 1+2) = 3 # From position 1 can jump to position 3, unchanged
i=2: nums[2]=1, max_reach = max(3, 2+1) = 3 # From position 2 can jump to position 3, unchanged
i=3: nums[3]=0, max_reach = max(3, 3+0) = 3 # From position 3 can jump to position 3, unchanged
i=4: i=4 > max_reach=3 # Position 4 exceeds reachable range ❌
Return false

Common Pitfalls:

Pitfall 1: Not checking if current position is reachable

# ❌ Wrong: Directly update max_reach without checking whether i is reachable
for i in range(n):
    max_reach = max(max_reach, i + nums[i])  # If i is unreachable, this update is wrong

# ✅ Correct: Check whether i is reachable first
for i in range(n):
    if i > max_reach:
        return False  # i unreachable
    max_reach = max(max_reach, i + nums[i])

Pitfall 2: Wrong early return condition

# ❌ Wrong: Check max_reach == n-1
if max_reach == n - 1:
    return True  # Misses the case max_reach > n-1, which should also return true

# ✅ Correct: Check max_reach >= n-1
if max_reach >= n - 1:
    return True

Variant Extension:

Variant 1: Jump Game II (Minimum Jumps)

LeetCode 45: Given an array nums where it is guaranteed that you can reach the last index, return the minimum number of jumps needed to reach it.

Approach: Use greedy + two pointers

def jump(self, nums: List[int]) -> int:
    """
    Return the minimum number of jumps to reach the last position

    Greedy Strategy:
    - Within the current reachable range, pick the position that can jump farthest as the next jump
    - Use two pointers to maintain the current jump range
    """
    jumps = 0        # Jump count
    current_end = 0  # Farthest position reachable by the current jump
    farthest = 0     # Farthest position reachable by the next jump

    for i in range(len(nums) - 1):  # No need to traverse the last position
        # Update the farthest position reachable by the next jump
        farthest = max(farthest, i + nums[i])

        # Reached the current jump's boundary
        if i == current_end:
            jumps += 1              # Must jump once
            current_end = farthest  # Update the boundary

            # Early return: can already reach the end
            if current_end >= len(nums) - 1:
                break

    return jumps
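A quick sanity check of the minimum-jumps logic, condensed into a standalone function (the name `jump` without `self` is mine, for testing outside the class):

```python
from typing import List

def jump(nums: List[int]) -> int:
    """Condensed version of the minimum-jumps solution for quick testing."""
    jumps = current_end = farthest = 0
    for i in range(len(nums) - 1):      # the last index never starts a jump
        farthest = max(farthest, i + nums[i])
        if i == current_end:            # boundary of the current jump
            jumps += 1
            current_end = farthest
            if current_end >= len(nums) - 1:
                break
    return jumps

print(jump([2, 3, 1, 1, 4]))  # → 2
print(jump([2, 3, 0, 1, 4]))  # → 2
print(jump([0]))              # → 0
```

Tracing [2,3,1,1,4]: at i=0 we commit the first jump (boundary 2), at i=2 we commit the second (boundary 4), which already covers the last index.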

Summary

Core Points

  1. Essence of greedy algorithms: Make locally optimal choice at each step, hoping local optimum leads to global optimum
  2. Challenge of greedy algorithms: Proving local optimum leads to global optimum (requires mathematical proof)
  3. Common greedy strategies:
    • Sorting: Greedily select after sorting by some attribute
    • Maintain optimal value: Dynamically update and maintain optimal value (like farthest distance)
    • Local adjustment: Make optimal adjustment based on current state

Problem Checklist

Interval Scheduling:

  • ✅ Non-overlapping Intervals (sort by end time)
  • ✅ Minimum Number of Arrows to Burst Balloons (interval overlap)
  • Meeting Rooms (interval scheduling)

Jump Game:

  • ✅ Jump Game (determine if reachable)
  • ✅ Jump Game II (minimum jumps)

Gas Station:

  • ✅ Gas Station (circular array)
  • Candy Distribution (local adjustment)
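Since Gas Station (LeetCode 134) is checked off above but not worked through in this article, here is a hedged sketch of the standard one-pass greedy for it (the function name `can_complete_circuit` is illustrative):

```python
from typing import List

def can_complete_circuit(gas: List[int], cost: List[int]) -> int:
    """LeetCode 134: return a starting station index from which the
    circular route can be completed once, or -1 if impossible.

    Greedy: if total gas >= total cost, a valid start exists; whenever
    the running tank goes negative, no station in the failed stretch
    can be the start, so restart the candidate from the next station.
    """
    total, tank, start = 0, 0, 0
    for i in range(len(gas)):
        diff = gas[i] - cost[i]
        total += diff
        tank += diff
        if tank < 0:          # stretch failed: move the candidate start
            start = i + 1
            tank = 0
    return start if total >= 0 else -1

print(can_complete_circuit([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))  # → 3
print(can_complete_circuit([2, 3, 4], [3, 4, 3]))              # → -1
```

This is the "local adjustment" pattern from the strategy list: a single O(n) pass with O(1) extra space.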

Other Classic Problems:

  • Best Time to Buy and Sell Stock II (multiple transactions)
  • Assign Cookies (sorting greedy)
  • Wiggle Subsequence (local adjustment)

Common Mistakes

  1. Wrong greedy strategy: Not correctly choosing greedy criterion (like sorting by start time instead of end time)
  2. Forgetting to prove correctness: Greedy algorithms need proof that local optimum leads to global optimum
  3. Boundary condition handling: Not handling empty array, single element, etc.
  4. Early return condition: Not returning early when condition met, reducing efficiency

Learning Suggestions

  1. Understand applicable scenarios: Not all problems can use greedy, must satisfy greedy choice property and optimal substructure
  2. Learn to prove correctness: Use proof by contradiction, exchange argument, etc. to prove greedy strategy correctness
  3. Compare with DP: Some problems can use either greedy or DP, understand differences and applicable scenarios
  4. Accumulate common patterns: Interval scheduling, jump problems, allocation problems all have fixed greedy patterns

I hope this article helps you gain a deep understanding of greedy algorithms. Keep practicing to gradually build greedy algorithm intuition and problem-solving framework! 🎯

  • Post title: LeetCode (9): Greedy Algorithms
  • Post author: Chen Kai
  • Create time: 2023-08-22 00:00:00
  • Post link: https://www.chenk.top/leetcode-9-greedy-algorithms/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.