Greedy algorithms are a problem-solving paradigm that makes the locally optimal choice at each step with the hope of finding a global optimum. While seemingly simple, the challenge lies in proving that "local optimum leads to global optimum." This article systematically covers greedy algorithm fundamentals, application scenarios, and classic problems like interval scheduling, jump game, and gas station to help you build a complete greedy algorithm framework.
Series Navigation
📚 LeetCode Algorithm Series (10 articles total):
- Hash Tables (Two Sum, Longest Consecutive Sequence, Group Anagrams)
- Two Pointers (Opposite Pointers, Fast-Slow Pointers, Sliding Window)
- Linked List Operations (Reversal, Cycle Detection, Merging)
- Binary Tree Traversal & Recursion (Inorder/Preorder/Postorder, LCA)
- Binary Tree Traversal & Construction (Pre/In/Post-order, Level-order, Construction)
- Backtracking (Permutations, Combinations, Pruning)
- Dynamic Programming Basics (1D/2D DP, State Transition)
- → Greedy Algorithms (Interval Scheduling, Jump Game) ← Current Article
- Graph Algorithms (BFS/DFS, Topological Sort, Union-Find)
- Bit Manipulation & Math (Bit Operations, Math Problems)
What is a Greedy Algorithm?
Core Concept
A Greedy Algorithm is a problem-solving approach that makes the locally optimal choice at each step. It doesn't consider the global picture, focusing only on local optimality, hoping that local optimal choices lead to a global optimum.
Characteristics of Greedy Algorithms:
- Local Optimality: Make the best choice that looks optimal at each step
- Irrevocability: Once a choice is made, it's never changed (unlike backtracking)
- Simple & Efficient: Usually low time complexity (O(n log n) or O(n))
- Requires Proof: Need to prove that local optimum leads to global optimum
Greedy vs Dynamic Programming vs Backtracking:
| Feature | Greedy | Dynamic Programming | Backtracking |
|---|---|---|---|
| Decision Method | Local optimum | Global optimum | Try all possibilities |
| Time Complexity | Usually O(n) or O(n log n) | Usually O(n²) or higher | Usually exponential |
| Use Cases | Greedy choice property | Optimal substructure + overlapping subproblems | Search all solutions |
| Revocability | Irrevocable | Irrevocable | Revocable |
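The "Requires Proof" caveat is not theoretical. As a minimal illustration (the coin denominations are my own, not from the article), greedy coin change picks the largest coin first, which is optimal for canonical systems like US coins but fails for denominations [1, 3, 4] with amount 6:

```python
def greedy_coin_change(coins, amount):
    """Greedy: always take the largest coin that still fits.
    Works for canonical coin systems, but NOT in general."""
    count = 0
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            count += 1
    return count if amount == 0 else -1

# Greedy takes 4, then 1, 1 -> 3 coins; the optimum 3 + 3 needs only 2
print(greedy_coin_change([1, 3, 4], 6))    # 3 (suboptimal!)
print(greedy_coin_change([1, 5, 10], 28))  # 6 (10+10+5+1+1+1, optimal here)
```

This is exactly why problems without the greedy choice property need dynamic programming instead.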
Two Key Properties
1. Greedy Choice Property
The global optimal solution can be reached through a series of local optimal (greedy) choices. In other words, we can make what appears to be the best choice, then solve the remaining subproblem.
Example: Activity selection problem
- Problem: Given activities with start/end times, select the maximum number of non-overlapping activities
- Greedy strategy: Always select the activity with the earliest end time
- Proof: Selecting the earliest-ending activity leaves the most time for subsequent activities
2. Optimal Substructure
The optimal solution to the problem contains optimal solutions to its subproblems. After making a greedy choice, the original problem simplifies to a smaller similar subproblem.
Design Steps
- Understand the problem: Clarify goal (maximize/minimize what) and constraints
- Determine greedy strategy: Find the "local optimum" selection criterion for each step
- Prove correctness: Prove local optimum leads to global optimum (proof by contradiction, exchange argument)
- Implement algorithm: Usually requires preprocessing like sorting
- Analyze complexity: Usually O(n), or O(n log n) when sorting dominates
Common Greedy Strategies
- Sort by end time: Interval scheduling problems
- Sort by start time: Meeting room allocation
- Sort by value density: Fractional knapsack problem
- Sort by length: String concatenation problems
- Sort by difference: Task assignment problems
- Local adjustment: Jump game, gas station
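As one concrete instance of "sort by value density", here is a minimal fractional-knapsack sketch (the item data and function name are illustrative, not from the original article):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs.
    Greedy: take items in decreasing value/weight ratio,
    splitting the last item if it doesn't fully fit."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # take as much as fits
        total += value * take / weight     # value is proportional to amount taken
        capacity -= take
    return total

# Classic textbook instance: capacity 50, items as (value, weight)
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Note that this greedy is provably optimal only because items are divisible; the 0/1 knapsack (indivisible items) requires dynamic programming.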
LeetCode 435: Non-overlapping Intervals
LeetCode 435: Given an array of intervals intervals where intervals[i] = [start_i, end_i], return the minimum number of intervals you need to remove to make the rest non-overlapping.
Example:
- Input: intervals = [[1,2],[2,3],[3,4],[1,3]]
- Output: 1 (Remove [1,3]; the remaining [[1,2],[2,3],[3,4]] are non-overlapping)
Problem Analysis
The essence of non-overlapping intervals is the interval scheduling problem: select the maximum number of non-overlapping intervals from a set. Minimum intervals to remove = Total intervals - Maximum non-overlapping intervals.
Core Challenges:
- How to determine interval overlap: Intervals [a, b] and [c, d] overlap if and only if a < d and c < b
- How to choose which intervals to keep: Need to find a greedy strategy
- How to prove greedy strategy correctness: Prove local optimum leads to global optimum
Key Insight:
Greedy Strategy: Sort intervals by end time, always select the interval with earliest end time that doesn't overlap with already selected intervals.
Why sort by end time:
- Selecting the earliest-ending interval leaves the most space for subsequent intervals
- Ending earlier allows more subsequent intervals to fit
- This is the classic "Activity Selection Problem"
Solution Approach
Algorithm Flow:
- Sort: Sort intervals by end time in ascending order
- Initialize: Select first interval (earliest ending), record current interval's end time
- Traverse: Starting from second interval
- If current interval's start time ≥ last selected interval's end time: No overlap, select current
- Otherwise: Overlap, skip current interval (needs removal)
- Return: Total intervals - Selected intervals = Intervals to remove
Proof of Greedy Strategy Correctness:
Using Exchange Argument:
Assume there exists an optimal solution OPT whose first interval is x, while the greedy choice is g, the interval with the earliest end time (so end(g) ≤ end(x)).
- Replace x in OPT with g, getting new solution OPT'
- Since end(g) ≤ end(x), g's compatibility with the remaining intervals is at least as good as x's
- OPT' is still optimal, and its first interval is the earliest-ending one
- Therefore, always selecting the earliest-ending interval doesn't affect optimality
Complexity Analysis
Time Complexity: O(n log n) (Python's sort() uses Timsort); Space Complexity: O(1) extra space (not counting the sort's internal stack)
Complexity Proof:
Sorting is the bottleneck and determines the total time complexity of O(n log n); the subsequent single scan is only O(n).
Code Implementation:
```python
from typing import List

class Solution:
    def eraseOverlapIntervals(self, intervals: List[List[int]]) -> int:
        # Sort by end time ascending: the classic activity-selection order
        intervals.sort(key=lambda x: x[1])

        removed = 0
        prev_end = intervals[0][1]  # end time of the last kept interval
        for start, end in intervals[1:]:
            if start >= prev_end:
                # No overlap: keep this interval
                prev_end = end
            else:
                # Overlap: remove the current (later-ending) interval
                removed += 1
        return removed
```
Algorithm Principle:
Why sorting by end time is optimal:
Consider two overlapping intervals A and B, where A ends earlier:

```
A: [----]
B:    [------]
```

If selecting A: the remaining timeline starts from A's earlier end, leaving more room for subsequent intervals.
If selecting B: the remaining timeline starts from B's later end, leaving less room.
Therefore, selecting the earlier-ending interval is always at least as good.
Execution Example (intervals = [[1,2],[2,3],[3,4],[1,3]]):

```
After sorting: [[1,2], [2,3], [1,3], [3,4]]   # sorted by end time
Select [1,2]  -> prev_end = 2
[2,3]: start 2 >= 2 -> keep, prev_end = 3
[1,3]: start 1 <  3 -> overlap, remove (removed = 1)
[3,4]: start 3 >= 3 -> keep, prev_end = 4
Return 1
```
Common Pitfalls:
Pitfall 1: Sort by start time
```python
# ❌ Wrong: Sort by start time
intervals.sort(key=lambda x: x[0])
# Counterexample: [[1,100],[2,3],[4,5]]
# Keeping [1,100] first forces removing [2,3] and [4,5] (2 removals);
# sorting by end time removes only [1,100] (1 removal).
```
Pitfall 2: Wrong overlap check
```python
# ❌ Wrong: Using > treats touching intervals like [1,2],[2,3] as overlapping
if start > prev_end:
    prev_end = end
# ✅ Correct: touching endpoints do not overlap
if start >= prev_end:
    prev_end = end
```
LeetCode 55: Jump Game
LeetCode 55: You are given an integer array nums. You are initially positioned at the array's first index, and each element in the array represents your maximum jump length at that position. Return true if you can reach the last index, or false otherwise.
Example:
- Input: nums = [2,3,1,1,4]; Output: true (Jump 1 step to index 1, then 3 steps to the last index)
- Input: nums = [3,2,1,0,4]; Output: false (You always arrive at index 3, whose jump length is 0)
Problem Analysis
The core of jump game is determining if we can reach the end. Key: maintaining the "farthest reachable position."
Core Challenges:
- Zero trap: If jump distance is 0 at some position, might not be able to proceed
- Maintaining farthest distance: How to efficiently update and check farthest reachable position
- Traversal strategy: Do we need to traverse all positions
Key Insight:
Greedy Strategy: Maintain a variable max_reach representing the farthest currently reachable position. Traverse the array, continuously updating max_reach. If the current position i > max_reach, position i is unreachable.
Why greedy works:
- We don't need to know exactly how to jump, only whether we "can reach"
- As long as the current position is reachable, update the farthest position jumpable from it
- If the farthest position ≥ the last index, the end is reachable
Solution Approach
Algorithm Flow:
- Initialize: max_reach = 0 (starting at position 0, we can reach at most index 0)
- Traverse: For each position i (from 0 to n-1):
  - If i > max_reach: cannot reach position i, return false
  - Update: max_reach = max(max_reach, i + nums[i]) (the farthest position jumpable from i)
  - If max_reach >= n-1: the end is already reachable, return true early
- Return: true (traversal complete, the last index is reachable)
Why is it greedy:
We don't care about the specific jump path, only the "farthest reachable position." At each step we expand the reachable range as much as possible; this locally optimal choice also guarantees the globally correct answer (either we reach the end, or we prove it's unreachable).
Complexity Analysis
Time Complexity: O(n); Space Complexity: O(1) (only the single variable max_reach)
Complexity Proof:
The array is traversed once, and updating max_reach takes O(1) per position, so the total is O(n).
Code Implementation:
```python
from typing import List

class Solution:
    def canJump(self, nums: List[int]) -> bool:
        max_reach = 0  # farthest index reachable so far
        n = len(nums)
        for i in range(n):
            if i > max_reach:
                # Position i itself is unreachable
                return False
            max_reach = max(max_reach, i + nums[i])
            if max_reach >= n - 1:
                # Early exit: the last index is already reachable
                return True
        return True
```
Algorithm Principle:
Why updating max_reach = max(max_reach, i + nums[i]) is correct:
- i is the current position
- nums[i] is the maximum jump length from the current position
- i + nums[i] is the farthest position reachable from the current position
- max(max_reach, i + nums[i]) ensures max_reach is always "the farthest position reachable so far"
Execution Example (nums = [2,3,1,1,4]):

```
Initial: max_reach = 0
i=0: 0 <= 0, max_reach = max(0, 0+2) = 2
i=1: 1 <= 2, max_reach = max(2, 1+3) = 4 >= 4 -> return True
```
Execution Example (nums = [3,2,1,0,4]):

```
Initial: max_reach = 0
i=0: max_reach = max(0, 0+3) = 3
i=1: max_reach = max(3, 1+2) = 3
i=2: max_reach = max(3, 2+1) = 3
i=3: max_reach = max(3, 3+0) = 3
i=4: 4 > 3 -> return False
```
Common Pitfalls:
Pitfall 1: Not checking if current position is reachable
```python
# ❌ Wrong: Update max_reach without first checking whether i is reachable
for i in range(len(nums)):
    max_reach = max(max_reach, i + nums[i])  # i may already be beyond max_reach
# ✅ Correct: check reachability before updating
for i in range(len(nums)):
    if i > max_reach:
        return False
    max_reach = max(max_reach, i + nums[i])
```
Pitfall 2: Wrong early return condition
```python
# ❌ Wrong: max_reach can jump past n-1 and never equal it exactly
if max_reach == n - 1:
    return True
# ✅ Correct: use >= so overshooting the last index still counts
if max_reach >= n - 1:
    return True
```
Variant Extension:
Variant 1: Jump Game II (Minimum Jumps)
LeetCode 45: Given an array nums where you are guaranteed to be able to reach the last index, return the minimum number of jumps needed to reach the last index.
Approach: Greedy + two pointers (the current jump's boundary and the farthest reach within it)

```python
def jump(self, nums: List[int]) -> int:
    jumps = 0        # jumps taken so far
    cur_end = 0      # rightmost index reachable with the current number of jumps
    farthest = 0     # rightmost index reachable with one more jump
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == cur_end:   # exhausted the current jump's range: jump again
            jumps += 1
            cur_end = farthest
    return jumps
```
Summary
Core Points
- Essence of greedy algorithms: Make locally optimal choice at each step, hoping local optimum leads to global optimum
- Challenge of greedy algorithms: Proving local optimum leads to global optimum (requires mathematical proof)
- Common greedy strategies:
- Sorting: Greedily select after sorting by some attribute
- Maintain optimal value: Dynamically update and maintain optimal value (like farthest distance)
- Local adjustment: Make optimal adjustment based on current state
Problem Checklist
Interval Scheduling:
- ✅ Non-overlapping Intervals (sort by end time)
- ✅ Minimum Number of Arrows to Burst Balloons (interval overlap)
- Meeting Rooms (interval scheduling)
Jump Game:
- ✅ Jump Game (determine if reachable)
- ✅ Jump Game II (minimum jumps)
Gas Station:
- ✅ Gas Station (circular array)
- Candy Distribution (local adjustment)
Other Classic Problems:
- Best Time to Buy and Sell Stock II (multiple transactions)
- Assign Cookies (sorting greedy)
- Wiggle Subsequence (local adjustment)
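Of the problems above, Best Time to Buy and Sell Stock II (LeetCode 122) has perhaps the shortest greedy solution: since multiple transactions are allowed, every positive day-to-day price rise can be captured independently, so summing all positive differences is optimal. A minimal sketch (the sample prices are illustrative):

```python
from typing import List

def max_profit(prices: List[int]) -> int:
    """Greedy for LeetCode 122: any multi-day upward run decomposes into
    consecutive one-day rises, so collecting every positive difference
    equals the best possible total profit."""
    return sum(max(prices[i + 1] - prices[i], 0)
               for i in range(len(prices) - 1))

print(max_profit([7, 1, 5, 3, 6, 4]))  # 7 (buy at 1 sell at 5, buy at 3 sell at 6)
```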
Common Mistakes
- Wrong greedy strategy: Not correctly choosing greedy criterion (like sorting by start time instead of end time)
- Forgetting to prove correctness: Greedy algorithms need proof that local optimum leads to global optimum
- Boundary condition handling: Not handling empty array, single element, etc.
- Early return condition: Not returning early when condition met, reducing efficiency
Learning Suggestions
- Understand applicable scenarios: Not all problems can use greedy, must satisfy greedy choice property and optimal substructure
- Learn to prove correctness: Use proof by contradiction, exchange argument, etc. to prove greedy strategy correctness
- Compare with DP: Some problems can use either greedy or DP, understand differences and applicable scenarios
- Accumulate common patterns: Interval scheduling, jump problems, allocation problems all have fixed greedy patterns
I hope this article helps you gain a deep understanding of greedy algorithms. Keep practicing to gradually build greedy algorithm intuition and problem-solving framework! 🎯
- Post title: LeetCode (9): Greedy Algorithms
- Post author: Chen Kai
- Create time: 2023-08-22 00:00:00
- Post link: https://www.chenk.top/leetcode-9-greedy-algorithms/
- Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.