Eigenvalues and eigenvectors are among the most profound and practical concepts in linear algebra. When we apply a matrix transformation to a vector, most vectors get both "rotated" and "stretched." But there's a special class of vectors that, after transformation, are only scaled — their direction remains completely unchanged. These are eigenvectors. Understanding them is understanding the "essence" of matrix transformations.
A Story from Everyday Life
Imagine you're in the kitchen kneading dough. Dough is a soft 3D object, and each time you knead, it deforms. Most "points" inside the dough move to completely different positions — they get both stretched and rotated.
But look carefully, and you'll notice some special directions: for example, when you press with a rolling pin, the "fibers" along the rolling pin direction only get compressed (scaled), without rotation. These special directions are the "intrinsic directions" of the dough deformation.
In linear algebra, matrices describe exactly this "dough-kneading" style of linear transformation. Eigenvectors are those vectors that "don't change direction, only get scaled" after transformation. Eigenvalues are the scaling factors.
![](特征值与特征向量/fig1.png)
Definition of Eigenvalues and Eigenvectors
Formal Definition
For an $n \times n$ matrix $A$, if there exist a non-zero vector $\mathbf{v}$ and a scalar $\lambda$ such that

$$A\mathbf{v} = \lambda\mathbf{v}$$

then $\lambda$ is called an eigenvalue of $A$, and $\mathbf{v}$ is an eigenvector corresponding to $\lambda$.
The word "eigen" comes from German, meaning "one's own" or "intrinsic." Eigenvectors are the "intrinsic" directions of the matrix.
Why Must the Vector Be Non-Zero?
The definition specifically emphasizes that $\mathbf{v} \neq \mathbf{0}$. Why?

Because for any matrix $A$ and any scalar $\lambda$, we always have $A\mathbf{0} = \mathbf{0} = \lambda\mathbf{0}$. If the zero vector were allowed, every number would count as an "eigenvalue," and the concept would tell us nothing.
Intuitive Understanding: Finding "Stable Directions"
Imagine standing on a rotating amusement ride. As it spins, most directions on your body keep changing. But one direction is stable — the rotation axis! Along this direction, no matter how the ride turns, this direction always points "up."
Eigenvectors are the "rotation axes" of matrix transformations — among all possible directions, they're the only ones that remain stable.
Geometric Meaning of Eigenvalues
What Different Eigenvalues Mean
The eigenvalue $\lambda$ describes what the transformation does along the eigenvector's direction:
| Eigenvalue | Geometric Meaning |
|---|---|
| $\lambda > 1$ | Vector is stretched |
| $0 < \lambda < 1$ | Vector is compressed |
| $\lambda = 1$ | Vector length unchanged |
| $\lambda = 0$ | Vector compressed to origin (singular case) |
| $-1 < \lambda < 0$ | Vector compressed and reversed |
| $\lambda < -1$ | Vector stretched and reversed |
| $\lambda$ complex | Vector rotates in complex plane (no invariant direction in real space) |
![](特征值与特征向量/fig2.png)
A Concrete Example: Shear Transformation
Consider the shear matrix:

$$S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

Most vectors change direction under this transformation. But vectors along the horizontal direction (along the $x$-axis) keep their direction.

Finding eigenvalues: $\det(S - \lambda I) = (1 - \lambda)^2 = 0$, so $\lambda = 1$ (a double root).

The corresponding eigenvector: $\mathbf{v} = (1, 0)^T$ — the horizontal direction, and (up to scaling) the only invariant one.
This example shows: even matrices that look very "distorted" still have invariant directions.
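A quick numerical check (assuming the unit shear matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ from the example above):

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # unit shear

eigenvalues, eigenvectors = np.linalg.eig(S)
print(eigenvalues)         # both eigenvalues equal 1
print(eigenvectors[:, 0])  # the invariant direction lies along the x-axis
```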
Real-Life Case: Your Reflection in a Mirror
When you look in a mirror, what transformation happens to your image?
Assuming the mirror lies in the vertical plane in front of you, the reflection fixes the two axes within the mirror plane and flips the front-back axis:

$$M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$

with eigenvalues $1, 1, -1$.

What you see in the mirror: the vertical axis (your symmetry axis) doesn't move (eigenvalue $1$), while the front-back direction is "flipped" (eigenvalue $-1$).
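A 2D sketch of the mirror as a matrix (here the mirror is taken along the vertical axis — an illustrative choice):

```python
import numpy as np

M = np.array([[-1.0, 0.0],   # front-back axis is flipped
              [ 0.0, 1.0]])  # vertical axis (the mirror line) is fixed

vals, vecs = np.linalg.eig(M)
print(vals)  # eigenvalues -1 and 1
```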
Characteristic Polynomial and Characteristic Equation
Deriving the Characteristic Equation
Starting from the definition $A\mathbf{v} = \lambda\mathbf{v}$, move everything to one side:

$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$

This is a homogeneous linear system. For it to have a non-zero solution, the coefficient matrix must be singular:

$$\det(A - \lambda I) = 0$$

This is the characteristic equation.
Characteristic Polynomial
For an $n \times n$ matrix $A$, the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$ is a degree-$n$ polynomial in $\lambda$; its roots (counted with multiplicity) are exactly the eigenvalues of $A$.
Complete Calculation Example

Find the eigenvalues and eigenvectors of matrix

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

Step 1: Write the characteristic equation

$$\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0$$

Step 2: Find eigenvalues

$$(\lambda - 3)(\lambda - 1) = 0 \implies \lambda_1 = 3, \quad \lambda_2 = 1$$

Step 3: Find eigenvectors

For $\lambda_1 = 3$: solve $(A - 3I)\mathbf{v} = \mathbf{0}$, i.e. $-v_1 + v_2 = 0$. Taking $v_1 = 1$ gives $\mathbf{v}_1 = (1, 1)^T$.

For $\lambda_2 = 1$: solve $(A - I)\mathbf{v} = \mathbf{0}$, i.e. $v_1 + v_2 = 0$. Taking $v_1 = 1$ gives $\mathbf{v}_2 = (1, -1)^T$.
Vieta's Formulas: Matrix Version
Coefficients of the characteristic polynomial have deep connections to the matrix:
Trace and sum of eigenvalues: $\mathrm{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$

Determinant and product of eigenvalues: $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$
In our example, both identities can be verified at a glance: the eigenvalues sum to $\mathrm{tr}(A)$ and multiply to $\det(A)$ — a quick sanity check for any eigenvalue computation.
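Both identities are easy to spot-check numerically; the matrix below is an arbitrary example:

```python
import numpy as np

# An arbitrary example matrix
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.sum(eigenvalues), np.trace(A))        # sum of eigenvalues vs trace
print(np.prod(eigenvalues), np.linalg.det(A))  # product vs determinant
```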
Diagonalization: Making Complex Transformations Simple
Core Idea of Diagonalization
Suppose an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ with eigenvalues $\lambda_1, \ldots, \lambda_n$.

Construct matrices:
- $P = (\mathbf{v}_1 \mid \mathbf{v}_2 \mid \cdots \mid \mathbf{v}_n)$ — eigenvectors as columns
- $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ — eigenvalues on the diagonal

Then we have the diagonalization decomposition:

$$A = P \Lambda P^{-1}$$
Why Is Diagonalization Useful?
Use 1: Fast computation of matrix powers
Imagine computing $A^{100}$ directly: that's $99$ matrix multiplications.

When $A = P\Lambda P^{-1}$, the inner factors cancel in pairs:

$$A^k = P\Lambda^k P^{-1}$$

and $\Lambda^k$ just raises each diagonal entry to the $k$-th power.
This is crucial in dynamical systems, Markov chains, neural networks, and other fields.
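The speed-up can be sketched as follows (the example matrix is chosen arbitrarily):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize once: eigenvalues and eigenvector matrix
lam, P = np.linalg.eig(A)

# A^100 via elementwise powers of the eigenvalues
k = 100
A_k = P @ np.diag(lam**k) @ np.linalg.inv(P)

# Matches repeated multiplication
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True
```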
Geometric Interpretation of Diagonalization
Diagonalization can be understood as a three-step transformation:
1. $P^{-1}$: Transform from original coordinates to eigenvector coordinates
2. $\Lambda$: In eigenvector coordinates, the transformation is just simple scaling
3. $P$: Transform back from eigenvector coordinates to original coordinates
![](特征值与特征向量/fig3.png)
When Can We Diagonalize?
Theorem: An $n \times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.

Sufficient conditions:
- If $A$ has $n$ distinct eigenvalues, then $A$ is diagonalizable
- If $A$ is real symmetric, then $A$ is diagonalizable (even orthogonally)

Non-diagonalizable example: the shear matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ — the eigenvalue $1$ has algebraic multiplicity $2$ but only one independent eigenvector, so there aren't enough eigenvectors to build $P$.
Complex Eigenvalues: The Mathematics of Rotation
Rotation Matrices Have No Real Eigenvalues
Consider the $90°$ rotation matrix:

$$R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

Finding eigenvalues: $\det(R - \lambda I) = \lambda^2 + 1 = 0$

Solution: $\lambda = \pm i$ — no real roots.

Geometric intuition: In the real 2D plane, a $90°$ rotation changes the direction of every non-zero vector, so no real invariant direction can exist.
Eigenvalues of General Rotation Matrices
Rotation matrix for angle $\theta$:

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

Eigenvalues are:

$$\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$$

This is exactly Euler's formula! The magnitude $|\lambda| = 1$ reflects that rotation preserves lengths.
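Numerically, for a sample angle:

```python
import numpy as np

theta = np.pi / 3  # sample angle: 60 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)
print(vals)          # cos(theta) ± i*sin(theta)
print(np.abs(vals))  # both magnitudes are 1: rotation preserves length
```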
Complex Eigenvalues Always Come in Pairs
For real matrices, if $\lambda = a + bi$ is an eigenvalue, then its complex conjugate $\bar{\lambda} = a - bi$ is also an eigenvalue.
Reason: Coefficients of the characteristic polynomial are all real, so complex roots must come in conjugate pairs.
Complex Eigenvalues and Oscillation
When a matrix has complex eigenvalues $\lambda = a \pm bi = re^{\pm i\omega}$, the dynamics it drives combine rotation (the angle $\omega$) with scaling (the modulus $r = |\lambda|$).

Long-term system behavior:
- $|\lambda| > 1$: oscillation with growing amplitude
- $|\lambda| = 1$: sustained periodic oscillation
- $|\lambda| < 1$: damped oscillation that decays to zero
This explains why an undamped spring oscillates periodically, while a damped oscillator gradually stops.
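A small simulation makes this concrete; the matrix below, a rotation scaled by $0.95$, is an illustrative choice with $|\lambda| = 0.95 < 1$:

```python
import numpy as np

# A rotation scaled by 0.95: complex eigenvalues of modulus 0.95 < 1
r, theta = 0.95, np.pi / 6
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
norms = []
for _ in range(60):
    x = A @ x
    norms.append(np.linalg.norm(x))

print(norms[0], norms[-1])  # the trajectory spirals inward: damped oscillation
```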
![](特征值与特征向量/fig4.png)
Application: Population Growth Model
Leslie Matrix
In ecology, Leslie matrices model age-structured population evolution. Suppose a species is divided into three age groups (juvenile, adult, elderly), represented by the population vector $\mathbf{p}_t = (j_t, a_t, e_t)^T$.
Concrete Example
Suppose for an animal:
- Juveniles don't reproduce (fecundity $0$)
- Adults and elderly reproduce at fixed per-capita rates
- Each year, a fixed fraction of each group survives into the next age class
A sketch of the computation (the vital rates in the matrix below are assumed for illustration, not taken from real data):

```python
import numpy as np

# Leslie matrix (rates assumed for illustration):
# - first row: per-capita offspring from (juvenile, adult, elderly)
# - sub-diagonal: survival rates juvenile->adult and adult->elderly
L = np.array([[0.0, 1.5, 0.5],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(L)
dominant = eigenvalues[np.argmax(np.abs(eigenvalues))]
print("Dominant eigenvalue:", dominant.real)
```
The dominant eigenvalue (largest in absolute value) of the Leslie matrix determines the fate of the population: asymptotically, the total population changes by a factor of $\lambda_1$ per year.
Long-Term Behavior Analysis
Population vector after $t$ years: $\mathbf{p}_t = L^t\mathbf{p}_0$.

As $t \to \infty$, the dominant eigenvalue takes over:

$$\mathbf{p}_t \approx c\,\lambda_1^t\,\mathbf{v}_1$$

Where $\lambda_1$ is the dominant eigenvalue and $\mathbf{v}_1$ its eigenvector.

Key insight:
- If $\lambda_1 > 1$: the population grows exponentially
- If $\lambda_1 < 1$: the population declines toward extinction
- If $\lambda_1 = 1$: the population stabilizes

The dominant eigenvector $\mathbf{v}_1$, normalized to sum to $1$, gives the stable age distribution — the long-run proportions of the age groups.
This model is widely used in wildlife conservation, fisheries management, and population policy.
Application: Google PageRank Algorithm
The Challenge of Web Ranking
In 1998, when Larry Page and Sergey Brin founded Google, they faced a core problem: how to measure a webpage's "importance"?
Their insight: a webpage's importance depends on how many important pages link to it. This is a recursive definition — requiring eigenvectors to solve!
Mathematical Modeling
Suppose the internet has $n$ webpages. Build an $n \times n$ hyperlink matrix $M$: if page $j$ has $k$ outgoing links, then $M_{ij} = 1/k$ for each page $i$ that $j$ links to, and $0$ otherwise.

Each page's PageRank value $r_i$ is the total importance flowing in from the pages that link to it.

In matrix form:

$$\mathbf{r} = M\mathbf{r}$$

This is exactly an eigenvalue problem! We want the eigenvector of $M$ with eigenvalue $1$.
Random Walk Interpretation
PageRank has an intuitive interpretation: imagine a "random surfer" clicking links aimlessly between webpages. At each page, they click any outlink with equal probability.
In the long run, the probability distribution of which pages this person visits is exactly the PageRank vector!
To ensure convergence, Google introduced a "damping factor" $d \approx 0.85$: with probability $d$ the surfer follows a link, and with probability $1 - d$ they jump to a uniformly random page. The resulting "Google matrix" has a unique dominant eigenvalue $1$, so the PageRank vector is well defined.
Simplified Example
Consider a tiny internet with only 4 webpages:
```
Page 1 → Pages 2, 3
```
The hyperlink matrix, combined with the standard damping factor $d = 0.85$, gives the Google matrix we iterate on:
```python
import numpy as np

# Link structure: Page 1 -> 2, 3 is given in the text; the
# remaining pages' links are assumed here for illustration
links = {1: [2, 3], 2: [3], 3: [1, 4], 4: [1]}
n, d = 4, 0.85

M = np.zeros((n, n))  # M[i, j] = probability of following a link j -> i
for j, outs in links.items():
    for i in outs:
        M[i - 1, j - 1] = 1 / len(outs)

G = d * M + (1 - d) / n  # Google matrix: damped links + random jumps
vals, vecs = np.linalg.eig(G)
r = np.real(vecs[:, np.argmax(np.abs(vals))])
r = r / r.sum()
print("PageRank:", np.round(r, 3))
```
![](特征值与特征向量/fig5.png)
Power Iteration Method
Google uses power iteration to compute PageRank: start from the uniform distribution $\mathbf{r}_0 = (1/n, \ldots, 1/n)^T$ and iterate $\mathbf{r}_{k+1} = G\mathbf{r}_k$ until it stops changing. Because the dominant eigenvalue of the Google matrix is $1$ and well separated from the rest, a few dozen iterations suffice even at web scale.
Fibonacci Sequence and Golden Ratio
Representing Recursion with Matrices
The Fibonacci sequence is defined as: $F_0 = 0$, $F_1 = 1$, $F_{n+1} = F_n + F_{n-1}$.

Using vector $\mathbf{x}_n = (F_{n+1}, F_n)^T$, the recursion becomes a matrix multiplication:

$$\mathbf{x}_n = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\mathbf{x}_{n-1}$$
Deriving the Closed-Form Formula via Diagonalization
Eigenvalues of matrix $F = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$ are the roots of $\lambda^2 - \lambda - 1 = 0$.

Let $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ (the golden ratio) and $\psi = \frac{1 - \sqrt{5}}{2} \approx -0.618$.

Through diagonalization we get Binet's Formula for Fibonacci:

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$$
Asymptotic Behavior
Since
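Binet's formula is easy to verify against the recursion (a quick sketch):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio
psi = (1 - np.sqrt(5)) / 2

def binet(n):
    # Closed form built from the eigenvalues of the Fibonacci matrix
    return (phi**n - psi**n) / np.sqrt(5)

fibs = [0, 1]
for _ in range(18):
    fibs.append(fibs[-1] + fibs[-2])

print([round(binet(n)) for n in range(20)] == fibs)  # True
```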
Special Properties of Symmetric Matrices
Spectral Theorem
Theorem (Spectral theorem for real symmetric matrices): Let $A$ be real symmetric ($A = A^T$). Then:

1. All eigenvalues of $A$ are real numbers
2. $A$ can be orthogonally diagonalized: $A = Q\Lambda Q^T$, where $Q$ is an orthogonal matrix
3. Eigenvectors corresponding to different eigenvalues are mutually orthogonal
Significance of Orthogonal Diagonalization
Ordinary diagonalization is $A = P\Lambda P^{-1}$, which requires computing $P^{-1}$. The inverse of an orthogonal matrix $Q$ is simply its transpose, $Q^{-1} = Q^T$, so orthogonal diagonalization is both cheaper and numerically stable.
Spectral Decomposition
Every real symmetric matrix can be written as a sum of rank-one matrices:

$$A = \sum_{i=1}^{n} \lambda_i\,\mathbf{q}_i\mathbf{q}_i^T$$

Where $\mathbf{q}_i$ are the orthonormal eigenvectors (the columns of $Q$).
This decomposition has important applications in principal component analysis (PCA), image compression, and recommendation systems.
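A numerical check of the rank-one expansion, using an arbitrary symmetric matrix (`np.linalg.eigh` is the symmetric-specialized routine and returns orthonormal eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh: for symmetric input, Q has orthonormal columns
lam, Q = np.linalg.eigh(A)

# Rebuild A as a sum of rank-one projections lambda_i * q_i q_i^T
A_rebuilt = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(len(lam)))
print(np.allclose(A, A_rebuilt))  # True
```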
Positive Definiteness and Eigenvalues
For symmetric matrix $A$, definiteness is read directly off the eigenvalues:

- $A$ is positive definite $\iff$ all eigenvalues $\lambda_i > 0$
- $A$ is positive semi-definite $\iff$ all eigenvalues $\lambda_i \geq 0$
Positive definite matrices play a central role in optimization and machine learning (positive definite Hessian matrix means the function has a local minimum).
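In practice, positive definiteness is often tested exactly this way; a minimal sketch:

```python
import numpy as np

def is_positive_definite(A):
    # For a symmetric matrix: positive definite iff all eigenvalues > 0
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # eigenvalues 1, 3
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # eigenvalues -1, 3
```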
Summary of Eigenvalue Properties
Basic Properties
| Property | Description |
|---|---|
| Trace | $\mathrm{tr}(A) = \sum_i \lambda_i$ |
| Determinant | $\det(A) = \prod_i \lambda_i$ |
| Invertibility | $A$ is invertible $\iff$ every $\lambda_i \neq 0$ |
| Eigenvalues of $A^k$ | $\lambda_i^k$, same eigenvectors |
| Eigenvalues of $A^{-1}$ | $1/\lambda_i$, same eigenvectors |
| Eigenvalues of $A + cI$ | $\lambda_i + c$, same eigenvectors |
| Eigenvalues of $A^T$ | same as those of $A$ (eigenvectors generally differ) |
Similar Matrices
If $B = P^{-1}AP$ for some invertible $P$, then $A$ and $B$ are called similar. Similar matrices share the same characteristic polynomial, hence the same eigenvalues (though generally different eigenvectors).
Algebraic and Geometric Multiplicities
- Algebraic multiplicity: multiplicity of eigenvalue as root of characteristic polynomial
- Geometric multiplicity: dimension of corresponding eigenspace (number of linearly independent eigenvectors)
Relationship: $1 \leq$ geometric multiplicity $\leq$ algebraic multiplicity.
When geometric multiplicity equals algebraic multiplicity for all eigenvalues, the matrix is diagonalizable.
Numerical Computation Methods
Power Iteration Method
Finding the largest eigenvalue and its eigenvector:
```python
import numpy as np

def power_iteration(A, num_iterations=100):
    # Repeatedly apply A and renormalize; the iterate aligns with the
    # eigenvector of the largest-magnitude eigenvalue
    v = np.random.rand(A.shape[0])
    for _ in range(num_iterations):
        v = A @ v
        v = v / np.linalg.norm(v)
    eigenvalue = v @ A @ v  # Rayleigh quotient
    return eigenvalue, v
```
Convergence rate depends on the ratio $|\lambda_2|/|\lambda_1|$: the larger the gap between the top two eigenvalues, the faster the convergence.
Inverse Power Iteration
Finding the smallest eigenvalue: Apply power iteration to $A^{-1}$, whose dominant eigenvalue is $1/\lambda_{\min}$ (for the eigenvalue of $A$ smallest in magnitude). In practice, instead of forming $A^{-1}$, each step solves the linear system $A\mathbf{w} = \mathbf{v}_k$.
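A minimal sketch, solving a linear system at each step rather than forming $A^{-1}$ explicitly (the test matrix is an arbitrary example):

```python
import numpy as np

def inverse_power_iteration(A, num_iterations=100):
    # Applies A^{-1} implicitly by solving A w = v at each step;
    # converges to the eigenvector of the smallest-magnitude eigenvalue
    v = np.ones(A.shape[0])
    for _ in range(num_iterations):
        v = np.linalg.solve(A, v)
        v = v / np.linalg.norm(v)
    return v @ A @ v, v  # Rayleigh quotient recovers the eigenvalue

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])  # eigenvalues 5 and 2
lam, v = inverse_power_iteration(A)
print(lam)  # ≈ 2, the smallest eigenvalue
```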
QR Algorithm
The gold standard algorithm for finding all eigenvalues:
```python
import numpy as np

def qr_algorithm(A, num_iterations=100):
    # Basic (unshifted) QR iteration: factor A = QR, recombine as RQ.
    # Each step is a similarity transform, so eigenvalues are preserved;
    # the iterates converge toward a triangular matrix whose diagonal
    # holds the eigenvalues.
    A = np.array(A, dtype=float)
    for _ in range(num_iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)
```
A refined version of this iteration (with shifts and an initial Hessenberg reduction) is the core algorithm behind the eigenvalue routines in LAPACK, which NumPy, MATLAB, and other software call under the hood.
Exercises
Conceptual Understanding
1. Why does the definition of eigenvectors require non-zero vectors? What would happen if zero vectors were allowed?
2. At most how many linearly independent eigenvectors can an $n \times n$ matrix have? When is that maximum achieved?
3. If $\mathbf{v}$ is an eigenvector of $A$, is $c\mathbf{v}$ (for any scalar $c \neq 0$) also an eigenvector? Is the eigenvector for a given eigenvalue unique?
4. Explain why rotation matrices (other than rotations by $0°$ or $180°$) have no real eigenvalues.
5. What kind of matrices are always diagonalizable? What kind might not be?
Computation Problems
6. Find the eigenvalues and eigenvectors of
7. Diagonalize matrix
8. Using the diagonalization from the previous
problem, compute
9. Find the eigenvalues (in complex form) of the rotation matrix $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.
10. Prove that matrix
11. Let
12. Find the eigenvalues of matrix
Proof Problems
13. Prove: If $\lambda$ is an eigenvalue of an invertible matrix $A$, then $1/\lambda$ is an eigenvalue of $A^{-1}$.
14. Prove: Eigenvectors corresponding to different eigenvalues are linearly independent.
15. Let
16. Prove Vieta's formulas for matrices: $\mathrm{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$.
Application Problems
17. (Population model) A city's population is
divided into three age groups: children (0-14), adults (15-64), elderly
(65+). The Leslie matrix is:
18. (PageRank) Consider a small network with 3 webpages: - Page A links to B and C - Page B links to C - Page C links to A
Write the hyperlink matrix $M$ and find its PageRank vector (the eigenvector for eigenvalue $1$).
19. (Generalized Fibonacci) Define a generalized Fibonacci sequence: $a_{n+1} = p\,a_n + q\,a_{n-1}$. (a) Write it in matrix form. (b) Find the eigenvalues. (c) What happens when $p = q = 1$?
20. (Markov chain) A system has two states A and B, with transition probabilities: A to A is 0.7, A to B is 0.3; B to A is 0.4, B to B is 0.6. (a) Write the transition matrix $P$. (b) Find its eigenvalues. (c) Find the steady-state distribution.
Programming Problems
21. Implement power iteration method to find the largest eigenvalue and corresponding eigenvector of a matrix. Test your implementation.
22. Write a program to visualize how a $2 \times 2$ matrix transforms a set of vectors, highlighting the eigenvector directions.
23. Implement a simplified PageRank algorithm and test on a small network (5-10 nodes).
24. Write a program to simulate the Leslie population model, visualizing population evolution from different initial conditions.
25. Implement the QR algorithm to compute all eigenvalues of a matrix, comparing with NumPy's results.
Exercise Solutions
Conceptual Understanding Solutions
1. Why must eigenvectors be non-zero?
Solution: If we allowed the zero vector, then $A\mathbf{0} = \mathbf{0} = \lambda\mathbf{0}$ for every scalar $\lambda$, so every number would count as an "eigenvalue" and the definition would lose all meaning.
Eigenvectors represent "special directions" of the transformation. The zero vector has no direction, so it cannot represent a meaningful invariant direction.
2. Can a matrix have no (real) eigenvalues?
Solution: Yes, over the real numbers. For example, the $90°$ rotation matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has characteristic polynomial $\lambda^2 + 1$, which has no real roots.

For any rotation by an angle other than $0°$ or $180°$, no direction is left unchanged, so no real eigenvalue can exist.

However, over the complex numbers, every $n \times n$ matrix has exactly $n$ eigenvalues counted with multiplicity (fundamental theorem of algebra).
3. Is an eigenvector unique?
Solution: No. If $\mathbf{v}$ is an eigenvector with eigenvalue $\lambda$, then so is every non-zero multiple $c\mathbf{v}$: $A(c\mathbf{v}) = cA\mathbf{v} = c\lambda\mathbf{v} = \lambda(c\mathbf{v})$. What is determined is the eigen-direction (the eigenspace), not an individual vector.
4. Geometric meaning of $\det(A) = 0$

Solution: $\det(A) = 0$ is equivalent to each of the following:

- Matrix $A$ is singular (non-invertible)
- $A$ has at least one zero eigenvalue
- The transformation "crushes" space into a lower dimension
- Some direction is mapped to the zero vector

Since $\det(A) = \prod_i \lambda_i$, the determinant vanishes exactly when some $\lambda_i = 0$; the corresponding eigenvector spans the direction that collapses to the origin.
5. Can different eigenvalues share the same eigenvector?
Solution: No (for the same matrix). Here's why:
Suppose $A\mathbf{v} = \lambda_1\mathbf{v}$ and $A\mathbf{v} = \lambda_2\mathbf{v}$ with $\lambda_1 \neq \lambda_2$.

Subtracting: $(\lambda_1 - \lambda_2)\mathbf{v} = \mathbf{0}$

Since $\lambda_1 \neq \lambda_2$, this forces $\mathbf{v} = \mathbf{0}$ — contradicting that eigenvectors are non-zero.
However, the same eigenvector direction can appear in different matrices with different eigenvalues.
Computation Problems Solutions
6. Find eigenvalues and eigenvectors of $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$

Solution:

Characteristic equation: $\det(A - \lambda I) = (4-\lambda)(3-\lambda) - 2 = \lambda^2 - 7\lambda + 10 = 0$

Solving: $\lambda_1 = 5$ with eigenvector $(2, 1)^T$, and $\lambda_2 = 2$ with eigenvector $(-1, 1)^T$
7. Compute $A^k$ using diagonalization

Solution: From problem 6, we have the eigenvalues and eigenvectors of $A$. Writing $A = P\Lambda P^{-1}$, where the columns of $P$ are the eigenvectors and $\Lambda$ carries the eigenvalues, matrix powers reduce to scalar powers:

$$A^k = P\Lambda^k P^{-1}$$
8. Find eigenvalues of rotation matrix $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

Solution:

Characteristic equation: $\lambda^2 - 2\cos\theta\,\lambda + 1 = 0$

Using quadratic formula: $\lambda = \cos\theta \pm \sqrt{\cos^2\theta - 1} = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$

These are complex conjugates on the unit circle!

Geometrically, rotation has no invariant real directions (except for $\theta = 0$ or $\theta = \pi$, where the eigenvalues are the real numbers $\pm 1$).
9. Diagonalize the Fibonacci matrix $F = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$

Solution:

Characteristic equation: $\det(F - \lambda I) = \lambda^2 - \lambda - 1 = 0$

Eigenvalues (golden ratio and its conjugate): $\varphi = \frac{1+\sqrt{5}}{2}, \quad \psi = \frac{1-\sqrt{5}}{2}$

For $\lambda = \varphi$: eigenvector $(\varphi, 1)^T$ (check: $F(\varphi, 1)^T = (\varphi + 1, \varphi)^T = \varphi(\varphi, 1)^T$ since $\varphi^2 = \varphi + 1$)

For $\lambda = \psi$: eigenvector $(\psi, 1)^T$

Diagonalization: $F = P\begin{pmatrix} \varphi & 0 \\ 0 & \psi \end{pmatrix}P^{-1}$ with $P = \begin{pmatrix} \varphi & \psi \\ 1 & 1 \end{pmatrix}$
10. Given
Solution:
For
Summary: -
Proof Problems Solutions
11. Prove: If $\lambda$ is an eigenvalue of an invertible matrix $A$, then $1/\lambda$ is an eigenvalue of $A^{-1}$

Proof: Let $A\mathbf{v} = \lambda\mathbf{v}$ with $\mathbf{v} \neq \mathbf{0}$.

Multiply both sides by $A^{-1}$: $\mathbf{v} = \lambda A^{-1}\mathbf{v}$

Rearranging: $A^{-1}\mathbf{v} = \frac{1}{\lambda}\mathbf{v}$

Therefore, $1/\lambda$ is an eigenvalue of $A^{-1}$, with the same eigenvector $\mathbf{v}$.

Note: This requires $\lambda \neq 0$, which holds automatically: an invertible matrix has no zero eigenvalue.
12. Prove: Eigenvectors corresponding to distinct eigenvalues are linearly independent
Proof: We'll use induction. Base case ($k = 1$): a single eigenvector is non-zero, hence linearly independent.

Inductive step: Assume eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_{k-1}$ (for distinct eigenvalues $\lambda_1, \ldots, \lambda_{k-1}$) are linearly independent.

Suppose: $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$ (*)

Apply $A$: $c_1\lambda_1\mathbf{v}_1 + \cdots + c_k\lambda_k\mathbf{v}_k = \mathbf{0}$ (**)

Multiply (*) by $\lambda_k$: $c_1\lambda_k\mathbf{v}_1 + \cdots + c_k\lambda_k\mathbf{v}_k = \mathbf{0}$

Subtract this from (**): $c_1(\lambda_1 - \lambda_k)\mathbf{v}_1 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)\mathbf{v}_{k-1} = \mathbf{0}$

By inductive hypothesis, $c_i(\lambda_i - \lambda_k) = 0$ for $i = 1, \ldots, k-1$; since the eigenvalues are distinct, $c_1 = \cdots = c_{k-1} = 0$.

Since $\mathbf{v}_k \neq \mathbf{0}$, substituting back into (*) gives $c_k = 0$. Hence the vectors are linearly independent.
13. Prove: $\mathrm{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$

Proof:

The characteristic polynomial is: $p(\lambda) = \det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)$

Its roots are the eigenvalues $\lambda_1, \ldots, \lambda_n$ (with multiplicity).

For the determinant: Set $\lambda = 0$: $p(0) = \det(A) = \lambda_1\lambda_2\cdots\lambda_n$

For the trace: Expanding $\det(A - \lambda I)$, the coefficient of $\lambda^{n-1}$ comes only from the product of the diagonal entries $\prod_i (a_{ii} - \lambda)$ and equals $(-1)^{n-1}\,\mathrm{tr}(A)$.

From the factored form, the coefficient of $\lambda^{n-1}$ is $(-1)^{n-1}(\lambda_1 + \cdots + \lambda_n)$.

Therefore: $\mathrm{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$
14. Prove: A matrix and its transpose have the same eigenvalues
Proof: The characteristic polynomial of $A^T$ is $\det(A^T - \lambda I)$.

Using the property $\det(M^T) = \det(M)$:

$$\det(A^T - \lambda I) = \det\big((A - \lambda I)^T\big) = \det(A - \lambda I)$$

Therefore, $A^T$ and $A$ have the same characteristic polynomial, hence the same eigenvalues.
Note: The eigenvectors are generally different!
15. Prove: Similar matrices have the same eigenvalues
Proof: Let $B = P^{-1}AP$. Then

$$\det(B - \lambda I) = \det\big(P^{-1}(A - \lambda I)P\big) = \det(P^{-1})\det(A - \lambda I)\det(P) = \det(A - \lambda I)$$

Since $\det(P^{-1})\det(P) = 1$, the characteristic polynomials are identical.
Same characteristic polynomial means same eigenvalues.
16. Verify Vieta's formulas for $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$

Solution:

Eigenvalues: $\det(A - \lambda I) = \lambda^2 - 7\lambda + 10 = 0$

Solving: $\lambda_1 = 5$, $\lambda_2 = 2$

Verify trace: $\lambda_1 + \lambda_2 = 7 = 4 + 3 = \mathrm{tr}(A)$ ✓

Verify determinant: $\lambda_1\lambda_2 = 10 = 4 \cdot 3 - 2 \cdot 1 = \det(A)$ ✓
Application Problems Solutions
17. Leslie population model (see problem statement)
Solution:
(a) Finding dominant eigenvalue:
The characteristic equation of the $3 \times 3$ Leslie matrix is a cubic with a unique positive root, the dominant eigenvalue.

Dominant eigenvalue: $\lambda_1 \approx 1.039$

Long-term trend: Population grows at approximately 3.9% per year (since $\lambda_1 - 1 \approx 0.039$).
(b) Stable age distribution:
Solve $(L - \lambda_1 I)\mathbf{v} = \mathbf{0}$ and normalize the entries to sum to $1$:
Interpretation: In the long run, the population will stabilize to: - 58% children (0-14) - 31% adults (15-64) - 11% elderly (65+)
18. PageRank simplified example
Solution:
Network structure: - A → B, C (A links to 2 pages, so weight 0.5 each) - B → C (B links to 1 page, weight 1) - C → A (C links to 1 page, weight 1)
Hyperlink matrix:

$$M = \begin{pmatrix} 0 & 0 & 1 \\ 0.5 & 0 & 0 \\ 0.5 & 1 & 0 \end{pmatrix}$$

(column $j$ holds the outgoing link probabilities of page $j$, in the order A, B, C)
Power iteration (starting from the uniform vector $(1/3, 1/3, 1/3)^T$):

After 20 iterations: $\mathbf{r} \approx (0.369, 0.202, 0.429)^T$
Ranking: C (42.9%) > A (36.9%) > B (20.2%)
Page C has the highest PageRank because it receives links from both A and B!
19. Generalized Fibonacci:
Solution:
(a) Matrix representation:

$$\begin{pmatrix} a_{n+1} \\ a_n \end{pmatrix} = \begin{pmatrix} p & q \\ 1 & 0 \end{pmatrix}\begin{pmatrix} a_n \\ a_{n-1} \end{pmatrix}$$

(b) Eigenvalues:

Characteristic equation: $\lambda^2 - p\lambda - q = 0$, so $\lambda = \frac{p \pm \sqrt{p^2 + 4q}}{2}$

(c) For $p = q = 1$: $\lambda = \frac{1 \pm \sqrt{5}}{2}$

These are exactly the golden ratio $\varphi$ and its conjugate $\psi$ — the ordinary Fibonacci case.
20. Markov chain steady state
Solution:
(a) Transition matrix:

$$P = \begin{pmatrix} 0.7 & 0.4 \\ 0.3 & 0.6 \end{pmatrix}$$

(Columns sum to 1: column $j$ holds the probabilities of leaving state $j$.)

(b) Eigenvalues:

Characteristic equation: $\lambda^2 - 1.3\lambda + 0.3 = 0$

Eigenvalues: $\lambda_1 = 1, \quad \lambda_2 = 0.3$

For $\lambda_1 = 1$: solve $(P - I)\mathbf{v} = \mathbf{0}$, i.e. $-0.3v_1 + 0.4v_2 = 0$.

This gives $\mathbf{v} = (4, 3)^T$ (up to scaling).

(c) Steady-state distribution (normalized): $\boldsymbol{\pi} = \left(\tfrac{4}{7}, \tfrac{3}{7}\right)^T \approx (0.571, 0.429)^T$
Interpretation: In the long run, the system spends 57.1% of time in state A and 42.9% in state B.
Programming Problems Solutions
21. Power iteration implementation
Solution:
The test matrix below is assumed from the printed output (dominant eigenvalue $5$, eigenvector proportional to $(2, 1)^T$):

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iterations=1000):
    # Apply A repeatedly, normalizing each step, until the direction
    # stops changing
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for k in range(max_iterations):
        w = A @ v
        w = w / np.linalg.norm(w)
        done = np.linalg.norm(w - v) < tol
        v = w
        if done:
            print(f"Converged in {k + 1} iterations")
            break
    return v @ A @ v, v  # Rayleigh quotient and eigenvector

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print("Power iteration result:")
print(f"Eigenvalue: {lam:.6f}")
print(f"Eigenvector: {v}")

vals, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(vals))
print("NumPy result:")
print(f"Eigenvalue: {vals[i].real:.6f}")
print(f"Eigenvector: {np.abs(vecs[:, i])}")
```
Output:

```
Converged in 27 iterations
Power iteration result:
Eigenvalue: 5.000000
Eigenvector: [0.89442719 0.4472136 ]
NumPy result:
Eigenvalue: 5.000000
Eigenvector: [0.89442719 0.4472136 ]
```
22-25. (Full implementations available in the Chinese version, including visualization of eigenvectors, PageRank algorithm, Leslie model simulation, and QR algorithm)
Summary
Eigenvalues and eigenvectors reveal the "internal structure" of linear transformations:
- Eigenvectors are special directions that "don't change direction" under transformation
- Eigenvalues describe the scaling along these directions
- Diagonalization lets us understand matrices in the "most natural" coordinate system
- Complex eigenvalues correspond to rotation, explaining oscillation and periodic behavior
- Dominant eigenvalue determines the system's long-term behavior
From Google search to population forecasting, from quantum mechanics to machine learning, eigenvalues are everywhere. Master this concept, and you hold the key to understanding complex systems.
Core intuition: Find those special vectors whose "direction doesn't change," and complex transformations become simple scaling.
References
- Strang, G. (2019). Introduction to Linear Algebra. Chapter 6.
- Axler, S. (2015). Linear Algebra Done Right. Chapters 5-7.
- 3Blue1Brown. Essence of Linear Algebra, Chapters 13-14.
- Page, L., Brin, S., et al. (1999). The PageRank Citation Ranking: Bringing Order to the Web. Stanford Technical Report.
- Caswell, H. (2001). Matrix Population Models. Sinauer Associates.
This is Chapter 6 of the 18-part "Essence of Linear Algebra" series.
- Post title: Essence of Linear Algebra (6): Eigenvalues and Eigenvectors
- Post author: Chen Kai
- Create time: 2019-02-01 09:15:00
- Post link: https://www.chenk.top/chapter-06-eigenvalues-and-eigenvectors/
- Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.