Essence of Linear Algebra (6): Eigenvalues and Eigenvectors
Chen Kai

Eigenvalues and eigenvectors are among the most profound and practical concepts in linear algebra. When we apply a matrix transformation to a vector, most vectors get both "rotated" and "stretched." But there's a special class of vectors that, after transformation, are only scaled — their direction remains completely unchanged. These are eigenvectors. Understanding them is understanding the "essence" of matrix transformations.

A Story from Everyday Life

Imagine you're in the kitchen kneading dough. Dough is a soft 3D object, and each time you knead, it deforms. Most "points" inside the dough move to completely different positions — they get both stretched and rotated.

But look carefully, and you'll notice some special directions: for example, when you press with a rolling pin, the "fibers" along the rolling pin direction only get compressed (scaled), without rotation. These special directions are the "intrinsic directions" of the dough deformation.

In linear algebra, matrices describe exactly this "dough-kneading" style of linear transformation. Eigenvectors are those vectors that "don't change direction, only get scaled" after transformation. Eigenvalues are the scaling factors.

Definition of Eigenvalues and Eigenvectors

Formal Definition

For an $n \times n$ square matrix $A$, if there exists a non-zero vector $v$ and a scalar $\lambda$ such that:

$$Av = \lambda v$$

Then:
- $v$ is called an eigenvector of matrix $A$
- $\lambda$ is called the corresponding eigenvalue

The word "eigen" comes from German, meaning "one's own" or "intrinsic." Eigenvectors are the "intrinsic" directions of the matrix.

Why Must the Vector Be Non-Zero?

The definition specifically emphasizes that $v$ must be non-zero. Why?

Because for any matrix $A$ and any scalar $\lambda$, the zero vector $\mathbf{0}$ satisfies $A\mathbf{0} = \lambda\mathbf{0}$. If we allowed zero vectors as eigenvectors, every number would be an eigenvalue, rendering the concept meaningless.

Intuitive Understanding: Finding "Stable Directions"

Imagine standing on a rotating amusement ride. As it spins, most directions on your body keep changing. But one direction is stable — the rotation axis! Along this direction, no matter how the ride turns, this direction always points "up."

Eigenvectors are the "rotation axes" of matrix transformations — among all possible directions, they're the only ones that remain stable.

Geometric Meaning of Eigenvalues

What Different Eigenvalues Mean

The eigenvalue $\lambda$ describes how much the eigenvector is "scaled":

| Eigenvalue | Geometric Meaning |
|---|---|
| $\lambda > 1$ | Vector is stretched |
| $0 < \lambda < 1$ | Vector is compressed |
| $\lambda = 1$ | Vector length unchanged |
| $\lambda = 0$ | Vector compressed to origin (singular case) |
| $-1 < \lambda < 0$ | Vector compressed and reversed |
| $\lambda < -1$ | Vector stretched and reversed |
| $\lambda$ complex | Vector rotates in complex plane (no invariant direction in real space) |

A Concrete Example: Shear Transformation

Consider the shear matrix:

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

This matrix transforms a square into a parallelogram (imagine pushing a stack of cards at an angle).

Most vectors change direction under this transformation. But vectors in the horizontal direction (along the $x$-axis) are the exception: the shear leaves them in place, direction unchanged!

Finding eigenvalues: $\det(A - \lambda I) = (1 - \lambda)^2 = 0$, giving $\lambda_1 = \lambda_2 = 1$ (repeated root).

The corresponding eigenvector: $v = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ (horizontal direction).

This example shows: even matrices that look very "distorted" still have invariant directions.
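The shear example is easy to confirm numerically. A minimal sketch using NumPy (my own addition, not from the original post):

```python
import numpy as np

# The shear matrix from the example above
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)         # both equal 1: the repeated root
print(eigenvectors[:, 0])  # points along the x-axis: [1, 0]
```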

Real-Life Case: Your Reflection in a Mirror

When you look in a mirror, what transformation happens to your image?

Assuming the mirror is the $y$-axis (vertically placed mirror), the reflection transformation is:

$$A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$

Eigenvalues and eigenvectors:
- $\lambda_1 = -1$, $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ (horizontal direction: left-right flipped, length unchanged)
- $\lambda_2 = 1$, $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ (vertical direction: completely unchanged)

What you see in the mirror: the vertical axis (your symmetry axis) doesn't move, while the front-back direction is "flipped."

Characteristic Polynomial and Characteristic Equation

Deriving the Characteristic Equation

Starting from the definition $Av = \lambda v$, we can rewrite it as:

$$(A - \lambda I)v = \mathbf{0}$$

This is a homogeneous linear system. For it to have a non-zero solution $v$, the matrix $A - \lambda I$ must be singular (determinant zero):

$$\det(A - \lambda I) = 0$$

This is the characteristic equation.

Characteristic Polynomial

$\det(A - \lambda I)$ is a polynomial in $\lambda$, called the characteristic polynomial.

For an $n \times n$ matrix, the characteristic polynomial has degree $n$, of the form:

$$p(\lambda) = (-1)^n \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1 \lambda + c_0$$

By the fundamental theorem of algebra, it has exactly $n$ roots in the complex numbers (counting multiplicity): these are the $n$ eigenvalues.

Complete Calculation Example

Find the eigenvalues and eigenvectors of matrix $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$.

Step 1: Write the characteristic equation

$$\det(A - \lambda I) = \begin{vmatrix} 4-\lambda & 2 \\ 1 & 3-\lambda \end{vmatrix} = \lambda^2 - 7\lambda + 10 = 0$$

Step 2: Find eigenvalues

$$(\lambda - 5)(\lambda - 2) = 0 \implies \lambda_1 = 5, \quad \lambda_2 = 2$$

Step 3: Find eigenvectors

For $\lambda_1 = 5$:

$$(A - 5I)v = \begin{pmatrix} -1 & 2 \\ 1 & -2 \end{pmatrix} v = \mathbf{0}$$

First row gives: $-x + 2y = 0$, i.e., $x = 2y$. Taking $y = 1$, we get $v_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$.

For $\lambda_2 = 2$:

$$(A - 2I)v = \begin{pmatrix} 2 & 2 \\ 1 & 1 \end{pmatrix} v = \mathbf{0}$$

First row gives: $2x + 2y = 0$, i.e., $y = -x$. Taking $x = 1$, we get $v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

Verification: $Av_1 = \begin{pmatrix} 10 \\ 5 \end{pmatrix} = 5v_1$, $Av_2 = \begin{pmatrix} 2 \\ -2 \end{pmatrix} = 2v_2$.

Vieta's Formulas: Matrix Version

Coefficients of the characteristic polynomial have deep connections to the matrix:

Trace and sum of eigenvalues:

$$\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

Determinant and product of eigenvalues:

$$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$$

In our example, the trace equals the sum of the eigenvalues and the determinant equals their product. This is a good way to verify your calculations!
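Both identities are easy to confirm numerically. A quick sketch (the small example matrix here is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

# Sum of eigenvalues equals the trace
print(np.trace(A), eigenvalues.sum())
# Product of eigenvalues equals the determinant
print(np.linalg.det(A), eigenvalues.prod())
```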

Diagonalization: Making Complex Transformations Simple

Core Idea of Diagonalization

Suppose an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$, with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Construct matrices:
- $P = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}$ (eigenvectors as columns)
- $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ (eigenvalues on diagonal)

Then we have the diagonalization decomposition:

$$A = P \Lambda P^{-1}$$

Or equivalently:

$$\Lambda = P^{-1} A P$$

Why Is Diagonalization Useful?

Use 1: Fast computation of matrix powers

$$A^k = P \Lambda^k P^{-1}$$

And $\Lambda^k$ is very easy to compute:

$$\Lambda^k = \operatorname{diag}(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k)$$

Imagine computing $A^{100}$: direct multiplication 100 times is slow, but after diagonalization, you only need to compute a few numbers.

Use 2: Understanding long-term behavior of matrices

When $k \to \infty$, the behavior of $A^k$ is completely determined by the largest-magnitude eigenvalue $\lambda_{\max}$:
- If $|\lambda_{\max}| > 1$: system diverges (explosive growth)
- If $|\lambda_{\max}| < 1$: system converges to zero
- If $|\lambda_{\max}| = 1$: system is stable or periodic

This is crucial in dynamical systems, Markov chains, neural networks, and other fields.
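Here is a sketch of the fast-power idea, assuming a small diagonalizable example matrix of my choosing:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Diagonalize: columns of P are eigenvectors
eigenvalues, P = np.linalg.eig(A)

# A^100 via A^k = P Lambda^k P^{-1}: only the eigenvalues get powered
A100 = P @ np.diag(eigenvalues**100) @ np.linalg.inv(P)

# Same answer as repeated multiplication
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))  # True
```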

Geometric Interpretation of Diagonalization

Diagonalization can be understood as a three-step transformation:

  1. $P^{-1}$: Transform from original coordinates to eigenvector coordinates
  2. $\Lambda$: In eigenvector coordinates, the transformation is just simple scaling
  3. $P$: Transform back from eigenvector coordinates to original coordinates

When Can We Diagonalize?

Theorem: Matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.

Sufficient conditions:
- If $A$ has $n$ distinct eigenvalues, it's definitely diagonalizable
- Real symmetric matrices are always diagonalizable (stronger: orthogonally diagonalizable)

Non-diagonalizable example:

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

Eigenvalue $\lambda = 1$ (repeated root), but only one linearly independent eigenvector. Such matrices are called defective matrices.

Complex Eigenvalues: The Mathematics of Rotation

Rotation Matrices Have No Real Eigenvalues

Consider the $90°$ rotation matrix:

$$R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

Finding eigenvalues:

$$\det(R - \lambda I) = \lambda^2 + 1 = 0$$

Solution: $\lambda = \pm i$ (pure imaginary)!

Geometric intuition: In the real 2D plane, a $90°$ rotation has no "direction-invariant" vector; every vector gets rotated. So real eigenvalues don't exist, and the eigenvalues "escape" to the complex domain.

Eigenvalues of General Rotation Matrices

Rotation matrix for angle $\theta$:

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

Eigenvalues are:

$$\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$$

This is exactly Euler's formula! The magnitude $|\lambda| = 1$ indicates pure rotation doesn't change vector length.

Complex Eigenvalues Always Come in Pairs

For real matrices, if $\lambda = a + bi$ is an eigenvalue, then its conjugate $\bar{\lambda} = a - bi$ must also be an eigenvalue.

Reason: Coefficients of the characteristic polynomial are all real, so complex roots must come in conjugate pairs.

Complex Eigenvalues and Oscillation

When a matrix has complex eigenvalues $\lambda = r e^{\pm i\theta}$:
- $r = |\lambda|$: scaling factor per iteration
- $\theta$: rotation angle per iteration

Long-term system behavior:
- $r < 1$: spiral convergence to origin
- $r = 1$: periodic oscillation (elliptical orbit)
- $r > 1$: spiral divergence

This explains why an undamped spring oscillates periodically, while a damped oscillator gradually stops.
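A sketch of spiral convergence, iterating a scaled rotation (the values of $r$ and $\theta$ are illustrative choices of mine):

```python
import numpy as np

# Eigenvalues of A are r * e^{±i·theta} with r = 0.9 < 1
r, theta = 0.9, np.pi / 6
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
norms = []
for _ in range(20):
    v = A @ v                        # rotate by theta, shrink by r
    norms.append(np.linalg.norm(v))

# The trajectory spirals inward: the norm shrinks by r each step
print(norms[0], norms[-1])
```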

Application: Population Growth Model

Leslie Matrix

In ecology, Leslie matrices model age-structured population evolution. Suppose a species is divided into three age groups (juvenile, adult, elderly), represented by the population vector:

$$p_t = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix}$$

Population after one year is determined by the Leslie matrix:

$$p_{t+1} = L p_t, \qquad L = \begin{pmatrix} f_1 & f_2 & f_3 \\ s_1 & 0 & 0 \\ 0 & s_2 & 0 \end{pmatrix}$$

Where:
- $f_i$: average fertility rate of age group $i$
- $s_i$: survival probability from age group $i$ to the next

Concrete Example

Suppose for an animal:
- Juveniles don't reproduce ($f_1 = 0$), adults have fertility 2 ($f_2 = 2$), elderly have fertility 0.5 ($f_3 = 0.5$)
- Juvenile-to-adult survival rate 0.6 ($s_1 = 0.6$), adult-to-elderly survival rate 0.8 ($s_2 = 0.8$)

Using Python to compute eigenvalues:

import numpy as np

L = np.array([[0,   2,   0.5],
              [0.6, 0,   0  ],
              [0,   0.8, 0  ]])

eigenvalues, eigenvectors = np.linalg.eig(L)
print("Eigenvalues:", eigenvalues)

# Dominant eigenvalue: the one with the largest absolute value
dominant = eigenvalues[np.argmax(np.abs(eigenvalues))].real
print("Dominant eigenvalue:", dominant)

The dominant eigenvalue (largest in absolute value) is approximately $1.18$.

Long-Term Behavior Analysis

Population vector after $t$ years:

$$p_t = L^t p_0$$

As $t \to \infty$:

$$p_t \approx c \, \lambda_1^t v_1$$

Where $v_1$ is the dominant eigenvector and $c$ depends on the initial state.

Key insight:
- If $\lambda_1 > 1$: population grows
- If $\lambda_1 = 1$: population stable
- If $\lambda_1 < 1$: population goes extinct

The dominant eigenvector $v_1$ gives the stable age distribution: regardless of initial state, the population eventually tends toward this distribution ratio.

This model is widely used in wildlife conservation, fisheries management, and population policy.
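The convergence claim can be checked directly by iterating the Leslie matrix from the example above (a sketch):

```python
import numpy as np

L = np.array([[0,   2,   0.5],
              [0.6, 0,   0  ],
              [0,   0.8, 0  ]])

# Start from an arbitrary initial population and iterate for 100 years
p = np.array([100.0, 0.0, 0.0])
for _ in range(100):
    p = L @ p
age_distribution = p / p.sum()

# Compare with the normalized dominant eigenvector
eigenvalues, eigenvectors = np.linalg.eig(L)
v = np.abs(eigenvectors[:, np.argmax(np.abs(eigenvalues))].real)
print(age_distribution)
print(v / v.sum())  # the two distributions agree
```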

Application: Google PageRank Algorithm

The Challenge of Web Ranking

In 1998, when Larry Page and Sergey Brin founded Google, they faced a core problem: how to measure a webpage's "importance"?

Their insight: a webpage's importance depends on how many important pages link to it. This is a recursive definition — requiring eigenvectors to solve!

Mathematical Modeling

Suppose the internet has $n$ webpages. Define the hyperlink matrix $H$:

$$H_{ij} = \begin{cases} 1/L_j & \text{if page } j \text{ links to page } i \\ 0 & \text{otherwise} \end{cases}$$

Where $L_j$ is the number of outlinks from page $j$.

Each page's PageRank value satisfies:

$$r_i = \sum_j H_{ij} \, r_j$$

In matrix form:

$$r = Hr$$

This is exactly an eigenvalue problem! We want to find $H$'s eigenvector corresponding to eigenvalue $1$.

Random Walk Interpretation

PageRank has an intuitive interpretation: imagine a "random surfer" clicking links aimlessly between webpages. At each page, they click any outlink with equal probability.

In the long run, the probability distribution of which pages this person visits is exactly the PageRank vector!

To ensure convergence, Google introduced a "damping factor" $d$ (typically 0.85):

$$G = d\,H + \frac{1-d}{n}\,J$$

where $J$ is the all-ones matrix. This means the surfer has a 15% chance of randomly jumping to any page (simulating directly typing URLs).

Simplified Example

Consider a tiny internet with only 4 webpages:

Page 1 → Pages 2, 3
Page 2 → Page 3
Page 3 → Page 1
Page 4 → Pages 1, 2, 3

The hyperlink matrix is:

$$H = \begin{pmatrix} 0 & 0 & 1 & 1/3 \\ 1/2 & 0 & 0 & 1/3 \\ 1/2 & 1 & 0 & 1/3 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

Applying the damping factor and using power iteration to find the dominant eigenvector gives each page's PageRank score. Page 3, being linked by multiple pages (and the pages linking to it, like Page 1, are themselves important), will have high PageRank.

import numpy as np

# Construct hyperlink matrix
H = np.array([[0,   0, 1, 1/3],
              [1/2, 0, 0, 1/3],
              [1/2, 1, 0, 1/3],
              [0,   0, 0, 0  ]])

d = 0.85
n = 4
G = d * H + (1 - d) / n * np.ones((n, n))

# Power iteration
r = np.ones(n) / n
for _ in range(100):
    r = G @ r

print("PageRank:", r)

Power Iteration Method

Google uses power iteration to compute PageRank:

$$r^{(k+1)} = G \, r^{(k)}$$

Iterating continuously, the vector converges to the dominant eigenvector. This method is particularly suitable for sparse matrices (the internet's link matrix is very sparse) and can efficiently handle billions of webpages.

Fibonacci Sequence and Golden Ratio

Representing Recursion with Matrices

The Fibonacci sequence is defined as: $F_0 = 0$, $F_1 = 1$, $F_{n+1} = F_n + F_{n-1}$.

Using a vector to represent adjacent terms:

$$\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix}$$

Therefore:

$$\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n \begin{pmatrix} F_1 \\ F_0 \end{pmatrix}$$

Deriving the Closed-Form Formula via Diagonalization

Eigenvalues of matrix $F = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$ satisfy $\lambda^2 - \lambda - 1 = 0$:

$$\lambda = \frac{1 \pm \sqrt{5}}{2}$$

Let $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ (golden ratio) and $\psi = \frac{1 - \sqrt{5}}{2} \approx -0.618$.

Through diagonalization we get Binet's Formula for Fibonacci:

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$$

Asymptotic Behavior

Since $|\psi| < 1$, when $n$ is large, $\psi^n \to 0$, so:

$$F_n \approx \frac{\varphi^n}{\sqrt{5}}$$

This explains why the ratio of adjacent Fibonacci numbers approaches the golden ratio:

$$\lim_{n \to \infty} \frac{F_{n+1}}{F_n} = \varphi$$

The golden ratio appearing here is no coincidence: it's exactly the dominant eigenvalue of the Fibonacci matrix!
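A sketch verifying the golden-ratio limit by iterating the Fibonacci matrix:

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# (F_{n+1}, F_n) = F^n (F_1, F_0), starting from (1, 0)
v = np.array([1.0, 0.0])
fib = [0, 1]
for _ in range(20):
    v = F @ v
    fib.append(int(round(v[0])))

phi = (1 + np.sqrt(5)) / 2
print(fib[-1] / fib[-2], phi)  # the ratio approaches the golden ratio
```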

Special Properties of Symmetric Matrices

Spectral Theorem

Theorem (Spectral theorem for real symmetric matrices): Let $A$ be an $n \times n$ real symmetric matrix ($A = A^T$), then:

  1. All eigenvalues of $A$ are real numbers
  2. $A$ can be orthogonally diagonalized: $A = Q \Lambda Q^T$, where $Q$ is an orthogonal matrix
  3. Eigenvectors corresponding to different eigenvalues are mutually orthogonal

Significance of Orthogonal Diagonalization

Ordinary diagonalization is $A = P \Lambda P^{-1}$, but symmetric matrix diagonalization is more elegant: $A = Q \Lambda Q^T$.

The inverse of an orthogonal matrix $Q$ is just its transpose ($Q^{-1} = Q^T$), which is very stable for numerical computation.

Spectral Decomposition

Every real symmetric matrix can be written as a sum of rank-one matrices:

$$A = \sum_{i=1}^{n} \lambda_i \, q_i q_i^T$$

Where $q_1, \ldots, q_n$ are orthonormal eigenvectors.

This decomposition has important applications in principal component analysis (PCA), image compression, and recommendation systems.
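A sketch of the spectral decomposition, rebuilding a small symmetric matrix (my example) from its rank-one pieces:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # real symmetric

# eigh is specialized for symmetric matrices: real eigenvalues, orthonormal Q
eigenvalues, Q = np.linalg.eigh(A)

# Sum of rank-one projections lambda_i * q_i q_i^T reconstructs A
reconstruction = sum(lam * np.outer(Q[:, i], Q[:, i])
                     for i, lam in enumerate(eigenvalues))
print(np.allclose(A, reconstruction))  # True
```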

Positive Definiteness and Eigenvalues

For a symmetric matrix $A$:

- $A$ is positive definite $\iff$ all eigenvalues $\lambda_i > 0$
- $A$ is positive semi-definite $\iff$ all eigenvalues $\lambda_i \geq 0$
- $A$ is negative definite $\iff$ all eigenvalues $\lambda_i < 0$
- $A$ is indefinite $\iff$ eigenvalues have both positive and negative values

Positive definite matrices play a central role in optimization and machine learning (positive definite Hessian matrix means the function has a local minimum).
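The eigenvalue test for definiteness translates directly into code; a sketch (the helper name and example matrices are mine):

```python
import numpy as np

def definiteness(A):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, ascending
    if np.all(w > 0):
        return "positive definite"
    if np.all(w >= 0):
        return "positive semi-definite"
    if np.all(w < 0):
        return "negative definite"
    return "indefinite"

print(definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])))   # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```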

Summary of Eigenvalue Properties

Basic Properties

| Property | Description |
|---|---|
| Trace | $\operatorname{tr}(A) = \sum_i \lambda_i$ |
| Determinant | $\det(A) = \prod_i \lambda_i$ |
| Invertibility | $A$ invertible $\iff$ all $\lambda_i \neq 0$ |
| Eigenvalues of $A^k$ | $\lambda_i^k$ (eigenvectors unchanged) |
| Eigenvalues of $A^{-1}$ | $1/\lambda_i$ (eigenvectors unchanged) |
| Eigenvalues of $A + cI$ | $\lambda_i + c$ (eigenvectors unchanged) |
| Eigenvalues of $cA$ | $c\lambda_i$ (eigenvectors unchanged) |

Similar Matrices

If $B = P^{-1}AP$ ($A$ and $B$ are similar), then:
- $A$ and $B$ have the same eigenvalues
- If $v$ is an eigenvector of $A$, then $P^{-1}v$ is an eigenvector of $B$

Diagonalization is essentially finding a similarity transformation that turns $A$ into the diagonal matrix $\Lambda$.
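A numerical sketch of the similarity claim (both matrices are arbitrary choices of mine):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # any invertible matrix

B = np.linalg.inv(P) @ A @ P  # B is similar to A

# Similar matrices share eigenvalues (their eigenvectors differ by P)
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(np.linalg.eigvals(B)))
```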

Algebraic and Geometric Multiplicities

  • Algebraic multiplicity: multiplicity of the eigenvalue $\lambda$ as a root of the characteristic polynomial
  • Geometric multiplicity: dimension of the corresponding eigenspace (number of linearly independent eigenvectors)

Relationship: Geometric multiplicity $\leq$ Algebraic multiplicity

When geometric multiplicity equals algebraic multiplicity for all eigenvalues, the matrix is diagonalizable.
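The two multiplicities can be compared in code: the geometric multiplicity is the nullity of $A - \lambda I$. A sketch using the defective shear matrix from earlier:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # eigenvalue 1 with algebraic multiplicity 2

# Geometric multiplicity = dim null(A - lambda*I) = n - rank(A - lambda*I)
lam = 1.0
geometric = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geometric)  # 1, less than the algebraic multiplicity 2: not diagonalizable
```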

Numerical Computation Methods

Power Iteration Method

Finding the largest eigenvalue and its eigenvector:

import numpy as np

def power_iteration(A, num_iterations=100):
    n = A.shape[0]
    v = np.random.rand(n)
    v = v / np.linalg.norm(v)

    for _ in range(num_iterations):
        Av = A @ v
        v = Av / np.linalg.norm(Av)

    eigenvalue = v @ A @ v  # Rayleigh quotient
    return eigenvalue, v

Convergence rate depends on the eigenvalue gap: the larger the ratio of the largest to the second-largest eigenvalue (in absolute value), the faster the convergence.

Inverse Power Iteration

Finding the smallest eigenvalue: apply power iteration to $A^{-1}$ (the largest eigenvalue of $A^{-1}$ is the reciprocal of $A$'s smallest-magnitude eigenvalue).
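A sketch of inverse power iteration (my own helper; it solves a linear system each step rather than forming $A^{-1}$ explicitly):

```python
import numpy as np

def inverse_power_iteration(A, num_iterations=100):
    """Estimate the smallest-magnitude eigenvalue by running
    power iteration on A^{-1}."""
    v = np.random.rand(A.shape[0])
    v = v / np.linalg.norm(v)
    for _ in range(num_iterations):
        w = np.linalg.solve(A, v)  # w = A^{-1} v without inverting A
        v = w / np.linalg.norm(w)
    return v @ A @ v, v            # Rayleigh quotient and eigenvector

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
lam, v = inverse_power_iteration(A)
print(lam)  # close to 2, the smallest eigenvalue of A
```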

QR Algorithm

The gold standard algorithm for finding all eigenvalues:

import numpy as np

def qr_algorithm(A, num_iterations=100):
    Ak = A.copy()
    for _ in range(num_iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)  # diagonal elements converge to the eigenvalues

This is the core algorithm used by NumPy, MATLAB, and other software for computing eigenvalues.

Exercises

Conceptual Understanding

1. Why does the definition of eigenvectors require non-zero vectors? What would happen if zero vectors were allowed?

2. How many linearly independent eigenvectors can an $n \times n$ matrix have at most? At minimum?

3. If $\lambda$ is an eigenvalue of matrix $A$, and $c$ is a constant, what are the eigenvalues of $cA$? Of $A + cI$?

4. Explain why rotation matrices (other than rotations by $0°$ or $180°$) have no eigenvalues in the real numbers.

5. What kind of matrices are always diagonalizable? What kind might not be?

Computation Problems

6. Find the eigenvalues and eigenvectors of $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$ and verify the results.

7. Diagonalize the matrix $A$ from the previous problem, writing it in the form $A = P \Lambda P^{-1}$.

8. Using the diagonalization from the previous problem, compute $A^{10}$.

9. Find the eigenvalues (in complex form) of the rotation matrix $R_\theta$.

10. Prove that the matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ cannot be diagonalized.

11. Let $A$ be an $n \times n$ matrix and $v$ an eigenvector of $A$ with eigenvalue $\lambda$. Prove $v$ is also an eigenvector of $A^2$, and find the corresponding eigenvalue.

12. Find the eigenvalues of the matrix $\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$ (Hint: this is the companion matrix of the Fibonacci recurrence).

Proof Problems

13. Prove: If $A$ is a real symmetric matrix, all its eigenvalues are real.

14. Prove: Eigenvectors corresponding to different eigenvalues are linearly independent.

15. Let $A$ and $B$ be similar matrices ($B = P^{-1}AP$). Prove $A$ and $B$ have the same characteristic polynomial, hence the same eigenvalues.

16. Prove Vieta's formulas for matrices: $\operatorname{tr}(A) = \sum_i \lambda_i$, $\det(A) = \prod_i \lambda_i$.

Application Problems

17. (Population model) A city's population is divided into three age groups: children (0-14), adults (15-64), elderly (65+), evolving according to a given Leslie matrix $L$. (a) Find the dominant eigenvalue and determine the long-term population trend. (b) Find the stable age distribution (normalized dominant eigenvector).

18. (PageRank) Consider a small network with 3 webpages:
- Page A links to B and C
- Page B links to C
- Page C links to A

Write the hyperlink matrix $H$ and use power iteration to compute each page's PageRank (damping factor $d = 0.85$).

19. (Generalized Fibonacci) Define a generalized Fibonacci sequence: $x_0 = 0$, $x_1 = 1$, $x_{n+1} = a x_n + b x_{n-1}$. (a) Write the corresponding matrix $M$. (b) Find the eigenvalues of $M$. (c) When $a = b = 1$, verify the eigenvalues are the golden ratio and its conjugate.

20. (Markov chain) A system has two states A and B, with transition probabilities: A to A is 0.7, A to B is 0.3; B to A is 0.4, B to B is 0.6. (a) Write the transition matrix $P$. (b) Find the eigenvalues and eigenvectors of $P$. (c) Find the steady-state distribution (the eigenvector corresponding to $\lambda = 1$).

Programming Problems

21. Implement power iteration method to find the largest eigenvalue and corresponding eigenvector of a matrix. Test your implementation.

22. Write a program to visualize how a $2 \times 2$ matrix transforms points on the unit circle, marking the eigenvector directions.

23. Implement a simplified PageRank algorithm and test on a small network (5-10 nodes).

24. Write a program to simulate the Leslie population model, visualizing population evolution from different initial conditions.

25. Implement the QR algorithm to compute all eigenvalues of a matrix, comparing with NumPy's results.

Exercise Solutions

Conceptual Understanding Solutions

1. Why must eigenvectors be non-zero?

Solution: If we allowed the zero vector $\mathbf{0}$ as an eigenvector, then for any matrix $A$ and any scalar $\lambda$, we'd have $A\mathbf{0} = \lambda\mathbf{0}$. This means every number would be an eigenvalue, rendering the concept meaningless.

Eigenvectors represent "special directions" of the transformation. The zero vector has no direction, so it cannot represent a meaningful invariant direction.

2. Can a matrix have no (real) eigenvalues?

Solution: Yes, over the real numbers. For example, rotation matrices:

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

For $\theta \neq 0, \pi$, this matrix has no real eigenvalues (only complex conjugate pairs), because rotation doesn't leave any real direction unchanged.

However, over the complex numbers, every $n \times n$ matrix has exactly $n$ eigenvalues (counting multiplicities) by the fundamental theorem of algebra.

3. Is an eigenvector unique?

Solution: No. If $v$ is an eigenvector, then any non-zero scalar multiple $cv$ (where $c \neq 0$) is also an eigenvector:

$$A(cv) = c(Av) = c(\lambda v) = \lambda(cv)$$

Geometrically, eigenvectors define a direction (a line through the origin), not a specific vector. Any vector along that line is an eigenvector.

4. Geometric meaning of $\det(A) = 0$

Solution: $\det(A) = 0$ means:

  • Matrix $A$ is singular (non-invertible)
  • $A$ has at least one zero eigenvalue
  • The transformation "crushes" space to lower dimensions
  • Some direction is mapped to the zero vector

Since $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$, if any $\lambda_i = 0$, then $\det(A) = 0$.

5. Can different eigenvalues share the same eigenvector?

Solution: No (for the same matrix). Here's why:

Suppose $v$ is an eigenvector for both $\lambda_1$ and $\lambda_2$:

$$Av = \lambda_1 v, \qquad Av = \lambda_2 v$$

Subtracting:

$$(\lambda_1 - \lambda_2)v = \mathbf{0}$$

Since $v \neq \mathbf{0}$ (eigenvectors are non-zero), we must have $\lambda_1 = \lambda_2$.

However, the same eigenvector direction can appear in different matrices with different eigenvalues.

Computation Problems Solutions

6. Find eigenvalues and eigenvectors of $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$

Solution:

Characteristic equation:

$$\det(A - \lambda I) = \lambda^2 - 7\lambda + 10 = 0$$

Solving: $(\lambda - 5)(\lambda - 2) = 0$, so $\lambda_1 = 5$, $\lambda_2 = 2$.

For $\lambda_1 = 5$: $(A - 5I)v = \mathbf{0}$ gives $-x + 2y = 0$, so $v_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$

For $\lambda_2 = 2$: $(A - 2I)v = \mathbf{0}$ gives $2x + 2y = 0$, so $v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$

7. Compute $A^{10}$, where $A$ is the matrix from problem 6, using diagonalization

Solution: From problem 6, we have:

$$P = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \Lambda = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}$$

Verify: $A = P \Lambda P^{-1}$. Then:

$$A^{10} = P \Lambda^{10} P^{-1} = P \begin{pmatrix} 5^{10} & 0 \\ 0 & 2^{10} \end{pmatrix} P^{-1} = \begin{pmatrix} 6510758 & 6509734 \\ 3254867 & 3255891 \end{pmatrix}$$

8. Find eigenvalues of the rotation matrix $R_\theta$

Solution:

Characteristic equation:

$$\det(R_\theta - \lambda I) = \lambda^2 - 2\cos\theta \, \lambda + 1 = 0$$

Using the quadratic formula:

$$\lambda = \cos\theta \pm \sqrt{\cos^2\theta - 1} = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$$

These are complex conjugates on the unit circle! Geometrically, rotation has no invariant real directions (except for $\theta = 0$ or $\pi$).

9. Diagonalize the Fibonacci matrix $F = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$

Solution:

Characteristic equation:

$$\lambda^2 - \lambda - 1 = 0$$

Eigenvalues (golden ratio and its conjugate):

$$\varphi = \frac{1 + \sqrt{5}}{2}, \qquad \psi = \frac{1 - \sqrt{5}}{2}$$

For $\varphi$: eigenvector $v_1 = \begin{pmatrix} \varphi \\ 1 \end{pmatrix}$

For $\psi$: eigenvector $v_2 = \begin{pmatrix} \psi \\ 1 \end{pmatrix}$

Diagonalization: $F = P \Lambda P^{-1}$, where

$$P = \begin{pmatrix} \varphi & \psi \\ 1 & 1 \end{pmatrix}, \qquad \Lambda = \begin{pmatrix} \varphi & 0 \\ 0 & \psi \end{pmatrix}$$

10. Given that $A$ has eigenvalues $5$ and $-2$, find the eigenvalues of $A^2$, $A^{-1}$, $A + 3I$

Solution:

For $\lambda_1 = 5$:
- $A^2$: $\lambda^2 = 25$
- $A^{-1}$: $1/\lambda = 1/5$ (valid since $\lambda \neq 0$)
- $A + 3I$: $\lambda + 3 = 8$

For $\lambda_2 = -2$:
- $A^2$: $\lambda^2 = 4$
- $A^{-1}$: $1/\lambda = -1/2$
- $A + 3I$: $\lambda + 3 = 1$

Summary:
- $A^2$ has eigenvalues 25, 4
- $A^{-1}$ has eigenvalues 1/5, -1/2
- $A + 3I$ has eigenvalues 8, 1

Proof Problems Solutions

11. Prove: If $A$ is invertible, $A^{-1}$ has eigenvalues $1/\lambda_i$

Proof: Let $\lambda$ be an eigenvalue of $A$ with eigenvector $v$:

$$Av = \lambda v$$

Multiply both sides by $A^{-1}$:

$$v = \lambda A^{-1} v$$

Rearranging:

$$A^{-1} v = \frac{1}{\lambda} v$$

Therefore, $v$ is an eigenvector of $A^{-1}$ with eigenvalue $1/\lambda$.

Note: This requires $\lambda \neq 0$. If $A$ is invertible, $\det(A) = \prod_i \lambda_i \neq 0$, so none of its eigenvalues are zero.

12. Prove: Eigenvectors corresponding to distinct eigenvalues are linearly independent

Proof: We'll use induction on $k$.

Base case ($k = 1$): A single non-zero eigenvector is trivially linearly independent.

Inductive step: Assume eigenvectors $v_1, \ldots, v_{k-1}$ with distinct eigenvalues are linearly independent. Now consider $v_k$ with eigenvalue $\lambda_k \neq \lambda_i$ for all $i < k$.

Suppose:

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = \mathbf{0} \quad (*)$$

Apply $A$:

$$c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2 + \cdots + c_k \lambda_k v_k = \mathbf{0} \quad (**)$$

Multiply $(*)$ by $\lambda_k$:

$$c_1 \lambda_k v_1 + c_2 \lambda_k v_2 + \cdots + c_k \lambda_k v_k = \mathbf{0}$$

Subtract this from $(**)$:

$$c_1(\lambda_1 - \lambda_k)v_1 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)v_{k-1} = \mathbf{0}$$

By the inductive hypothesis, $v_1, \ldots, v_{k-1}$ are linearly independent, so:

$$c_i(\lambda_i - \lambda_k) = 0 \quad \text{for } i = 1, \ldots, k-1$$

Since $\lambda_i \neq \lambda_k$, we have $c_i = 0$ for all $i < k$. From $(*)$, this gives $c_k v_k = \mathbf{0}$, so $c_k = 0$.

13. Prove: $\operatorname{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$

Proof:

The characteristic polynomial is:

$$p(\lambda) = \det(A - \lambda I)$$

Its roots are the eigenvalues $\lambda_1, \ldots, \lambda_n$, so:

$$p(\lambda) = (-1)^n (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n)$$

For the determinant: Set $\lambda = 0$:

$$p(0) = \det(A) = (-1)^n (-\lambda_1)(-\lambda_2) \cdots (-\lambda_n) = \lambda_1 \lambda_2 \cdots \lambda_n$$

For the trace: In the factored form, the coefficient of $\lambda^{n-1}$ is:

$$(-1)^n \cdot \left( -(\lambda_1 + \lambda_2 + \cdots + \lambda_n) \right) = (-1)^{n-1} \sum_i \lambda_i$$

Expanding $\det(A - \lambda I)$ directly shows the coefficient of $\lambda^{n-1}$ is $(-1)^{n-1} \operatorname{tr}(A)$.

Therefore:

$$\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

14. Prove: A matrix and its transpose have the same eigenvalues

Proof: The characteristic polynomial of $A^T$ is:

$$\det(A^T - \lambda I) = \det\left( (A - \lambda I)^T \right)$$

Using the property $\det(M^T) = \det(M)$:

$$\det(A^T - \lambda I) = \det(A - \lambda I)$$

Therefore, $A$ and $A^T$ have the same characteristic polynomial, hence the same eigenvalues.

Note: The eigenvectors are generally different!

15. Prove: Similar matrices have the same eigenvalues

Proof: Let $B = P^{-1}AP$. The characteristic polynomial of $B$ is:

$$\det(B - \lambda I) = \det(P^{-1}AP - \lambda I) = \det\left( P^{-1}(A - \lambda I)P \right)$$

Since $\det(P^{-1})\det(P) = 1$:

$$\det(B - \lambda I) = \det(A - \lambda I)$$

Same characteristic polynomial means same eigenvalues.

16. Verify Vieta's formulas for $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$

Solution:

Eigenvalues satisfy:

$$\lambda^2 - 7\lambda + 10 = 0$$

Solving: $\lambda_1 = 5$, $\lambda_2 = 2$

Verify trace: $\operatorname{tr}(A) = 4 + 3 = 7 = 5 + 2$

Verify determinant: $\det(A) = 4 \times 3 - 2 \times 1 = 10 = 5 \times 2$

Application Problems Solutions

17. Leslie population model (see problem statement)

Solution:

(a) Finding dominant eigenvalue:

Solve the characteristic equation $\det(L - \lambda I) = 0$ for the given Leslie matrix.

Dominant eigenvalue: $\lambda_1 \approx 1.039$ (largest absolute value)

Long-term trend: Population grows at approximately 3.9% per year (since $\lambda_1 \approx 1.039$)

(b) Stable age distribution:

Solve $(L - \lambda_1 I)v = \mathbf{0}$ to find the eigenvector corresponding to $\lambda_1$.

Interpretation: In the long run, the population will stabilize to:
- 58% children (0-14)
- 31% adults (15-64)
- 11% elderly (65+)

18. PageRank simplified example

Solution:

Network structure:
- A → B, C (A links to 2 pages, so weight 1/2 each)
- B → C (B links to 1 page, weight 1)
- C → A (C links to 1 page, weight 1)

Hyperlink matrix:

$$H = \begin{pmatrix} 0 & 0 & 1 \\ 1/2 & 0 & 0 \\ 1/2 & 1 & 0 \end{pmatrix}$$

With damping factor $d = 0.85$:

$$G = 0.85\,H + \frac{0.15}{3}\,J$$

Power iteration (starting from $r^{(0)} = (1/3, 1/3, 1/3)^T$):

After 20 iterations: $r \approx (0.388, 0.215, 0.397)^T$

Ranking: C (39.7%) > A (38.8%) > B (21.5%)

Page C has the highest PageRank because it receives links from both A and B!

19. Generalized Fibonacci: $x_{n+1} = a x_n + b x_{n-1}$

Solution:

(a) Matrix representation:

$$\begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix} = \begin{pmatrix} a & b \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x_n \\ x_{n-1} \end{pmatrix}$$

(b) Eigenvalues:

Characteristic equation: $\lambda^2 - a\lambda - b = 0$, so

$$\lambda = \frac{a \pm \sqrt{a^2 + 4b}}{2}$$

(c) For $a = b = 1$ (standard Fibonacci):

$$\lambda = \frac{1 \pm \sqrt{5}}{2}$$

These are exactly the golden ratio $\varphi$ and its conjugate $\psi$!

20. Markov chain steady state

Solution:

(a) Transition matrix:

$$P = \begin{pmatrix} 0.7 & 0.4 \\ 0.3 & 0.6 \end{pmatrix}$$

(Columns sum to 1: column $j$ shows transitions FROM state $j$)

(b) Eigenvalues:

Characteristic equation: $\lambda^2 - 1.3\lambda + 0.3 = 0$

Eigenvalues: $\lambda_1 = 1$, $\lambda_2 = 0.3$

For $\lambda_1 = 1$:

$$(P - I)v = \begin{pmatrix} -0.3 & 0.4 \\ 0.3 & -0.4 \end{pmatrix} v = \mathbf{0}$$

This gives $0.3 v_A = 0.4 v_B$, so $v = \begin{pmatrix} 4 \\ 3 \end{pmatrix}$

(c) Steady-state distribution (normalized):

$$\pi = \begin{pmatrix} 4/7 \\ 3/7 \end{pmatrix} \approx \begin{pmatrix} 0.571 \\ 0.429 \end{pmatrix}$$

Interpretation: In the long run, the system spends 57.1% of time in state A and 42.9% in state B.
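This steady state can be confirmed with NumPy (a sketch):

```python
import numpy as np

# Column j holds the transition probabilities FROM state j
P = np.array([[0.7, 0.4],
              [0.3, 0.6]])

eigenvalues, eigenvectors = np.linalg.eig(P)
idx = np.argmin(np.abs(eigenvalues - 1))  # pick the eigenvalue 1
steady = eigenvectors[:, idx].real
steady = steady / steady.sum()            # normalize to a probability vector
print(steady)  # approximately [0.571, 0.429], i.e. (4/7, 3/7)
```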

Programming Problems Solutions

21. Power iteration implementation

Solution:

import numpy as np

def power_iteration(A, num_iterations=100, tolerance=1e-10):
    """
    Power iteration method to find largest eigenvalue and eigenvector

    Parameters:
        A: Square matrix
        num_iterations: Maximum iterations
        tolerance: Convergence tolerance

    Returns:
        eigenvalue: Dominant eigenvalue
        eigenvector: Corresponding eigenvector (normalized)
    """
    n = A.shape[0]

    # Random initial vector (normalized)
    v = np.random.rand(n)
    v = v / np.linalg.norm(v)

    for i in range(num_iterations):
        # Matrix-vector multiplication
        v_new = A @ v

        # Compute eigenvalue (Rayleigh quotient)
        eigenvalue = v @ (A @ v)

        # Normalize
        v_new = v_new / np.linalg.norm(v_new)

        # Check convergence
        if np.linalg.norm(v_new - v) < tolerance:
            print(f"Converged in {i+1} iterations")
            return eigenvalue, v_new

        v = v_new

    eigenvalue = v @ (A @ v)
    return eigenvalue, v


# Test
A = np.array([[4, 2], [1, 3]])

# Our implementation
lambda_max, v_max = power_iteration(A, num_iterations=100)
print("Power iteration result:")
print(f"  Eigenvalue: {lambda_max:.6f}")
print(f"  Eigenvector: {v_max}")

# NumPy verification
eigenvalues, eigenvectors = np.linalg.eig(A)
max_idx = np.argmax(np.abs(eigenvalues))
print("\nNumPy result:")
print(f"  Eigenvalue: {eigenvalues[max_idx]:.6f}")
print(f"  Eigenvector: {eigenvectors[:, max_idx]}")

Output:

Converged in 27 iterations
Power iteration result:
  Eigenvalue: 5.000000
  Eigenvector: [0.89442719 0.4472136 ]

NumPy result:
  Eigenvalue: 5.000000
  Eigenvector: [0.89442719 0.4472136 ]

22-25. (Full implementations available in the Chinese version, including visualization of eigenvectors, PageRank algorithm, Leslie model simulation, and QR algorithm)

Summary

Eigenvalues and eigenvectors reveal the "internal structure" of linear transformations:

  • Eigenvectors are special directions that "don't change direction" under transformation
  • Eigenvalues describe the scaling along these directions
  • Diagonalization lets us understand matrices in the "most natural" coordinate system
  • Complex eigenvalues correspond to rotation, explaining oscillation and periodic behavior
  • Dominant eigenvalue determines the system's long-term behavior

From Google search to population forecasting, from quantum mechanics to machine learning, eigenvalues are everywhere. Master this concept, and you hold the key to understanding complex systems.

Core intuition: Find those special vectors whose "direction doesn't change," and complex transformations become simple scaling.

References

  1. Strang, G. (2019). Introduction to Linear Algebra. Chapter 6.
  2. Axler, S. (2015). Linear Algebra Done Right. Chapters 5-7.
  3. 3Blue1Brown. Essence of Linear Algebra, Chapters 13-14.
  4. Page, L., Brin, S., et al. (1999). The PageRank Citation Ranking: Bringing Order to the Web. Stanford Technical Report.
  5. Caswell, H. (2001). Matrix Population Models. Sinauer Associates.

This is Chapter 6 of the 18-part "Essence of Linear Algebra" series.

  • Post title: Essence of Linear Algebra (6): Eigenvalues and Eigenvectors
  • Post author: Chen Kai
  • Create time: 2019-02-01 09:15:00
  • Post link: https://www.chenk.top/chapter-06-eigenvalues-and-eigenvectors/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.