Ordinary Differential Equations (11): Numerical Methods
Chen Kai

Many differential equations arising in practical problems cannot be solved by analytical methods — this is when numerical methods become our most powerful weapon. From Euler's simple idea to modern adaptive algorithms, numerical methods allow us to "approximately" solve virtually any differential equation. This chapter explores in depth the principles, implementations, and error analysis of various numerical methods.

Why Do We Need Numerical Methods?

Looking back at previous chapters, we learned many analytical methods: separation of variables, integrating factors, variation of parameters, Laplace transforms, and more. These methods are elegant, but they have a fatal limitation: they only work for specific types of equations.

Consider this seemingly simple equation (a classic example; the Riccati equation):

$$\frac{dy}{dx} = x^2 + y^2$$

Try solving it with any method we've learned — you'll find nothing works! This equation has no solution in terms of elementary functions (its general solution involves Bessel functions).

More generally, most differential equations encountered in engineering and science cannot be solved analytically:

  • Nonlinear equations (like the Navier-Stokes equations)
  • Variable coefficient equations
  • High-dimensional systems
  • Problems with complex boundary conditions

The core idea of numerical methods: Since we cannot obtain the exact solution, let's compute approximate values at discrete points.

Euler's Method: The Simplest Numerical Method

Basic Idea

Euler's method is the ancestor of all numerical methods, proposed by Leonhard Euler in 1768. Its idea is extremely simple: approximate a curve with its tangent line.

Consider the initial value problem:

$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$

At the point $(x_n, y_n)$, the slope of the tangent to the solution curve is $f(x_n, y_n)$. Walking a small step $h$ along this tangent, we get:

$$y_{n+1} = y_n + h\, f(x_n, y_n)$$

This is the Forward Euler formula.

Repeating this process:

$$x_{n+1} = x_n + h, \qquad y_{n+1} = y_n + h\, f(x_n, y_n), \qquad n = 0, 1, 2, \dots$$

Geometric Intuition

Imagine you're walking through a maze, but can only see a small patch of ground beneath your feet. Your strategy is:

  1. Check the slope of the terrain at your current position (derivative)
  2. Take a small step in that direction
  3. Repeat

This is the essence of Euler's method — local linearization.

Python Implementation

import numpy as np
import matplotlib.pyplot as plt

def euler_method(f, x0, y0, x_end, h):
    """
    Forward Euler method

    Parameters:
        f: Function f(x, y) representing dy/dx = f(x, y)
        x0, y0: Initial conditions
        x_end: Integration endpoint
        h: Step size

    Returns:
        x_values, y_values: Numerical solution
    """
    x_values = [x0]
    y_values = [y0]

    x, y = x0, y0
    while x < x_end:
        h_step = min(h, x_end - x)  # clamp the last step so we land exactly on x_end
        y = y + h_step * f(x, y)
        x = x + h_step
        x_values.append(x)
        y_values.append(y)

    return np.array(x_values), np.array(y_values)

# Example: dy/dx = -2y, y(0) = 1
# Exact solution: y = exp(-2x)
f = lambda x, y: -2 * y
x0, y0 = 0, 1
x_end = 3

# Comparison of different step sizes
fig, axes = plt.subplots(1, 2, figsize=(14, 5))

# Left: Euler solutions with different step sizes
step_sizes = [0.5, 0.2, 0.1, 0.05]
x_exact = np.linspace(0, x_end, 100)
y_exact = np.exp(-2 * x_exact)

axes[0].plot(x_exact, y_exact, 'k-', linewidth=2, label='Exact solution')
for h in step_sizes:
    x_num, y_num = euler_method(f, x0, y0, x_end, h)
    axes[0].plot(x_num, y_num, 'o-', markersize=4, label=f'h = {h}')

axes[0].set_xlabel('x')
axes[0].set_ylabel('y')
axes[0].set_title('Euler Method: Comparison of Different Step Sizes')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

# Right: Error vs step size
errors = []
step_sizes_fine = np.logspace(-3, 0, 20)
y_exact_at_end = np.exp(-2 * x_end)
for h in step_sizes_fine:
    x_num, y_num = euler_method(f, x0, y0, x_end, h)
    errors.append(abs(y_num[-1] - y_exact_at_end))

axes[1].loglog(step_sizes_fine, errors, 'bo-', label='Actual error')
axes[1].loglog(step_sizes_fine, step_sizes_fine, 'r--', label='O(h) reference line')
axes[1].set_xlabel('Step size h')
axes[1].set_ylabel('Error at endpoint')
axes[1].set_title('Order of Accuracy of Euler Method')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('euler_method.png', dpi=150, bbox_inches='tight')
plt.show()

Error Analysis

Euler method errors fall into two categories:

Local Truncation Error (LTE): the error introduced in a single step, assuming the previous value is exact.

By Taylor expansion:

$$y(x_{n+1}) = y(x_n) + h\,y'(x_n) + \frac{h^2}{2}\,y''(\xi_n)$$

The Euler formula keeps only the first two terms, so:

$$\text{LTE} = \frac{h^2}{2}\,y''(\xi_n) = O(h^2)$$

Global Truncation Error (GTE): the accumulated error from $x_0$ to $x_N$. After $N = (x_N - x_0)/h$ steps:

$$\text{GTE} = N \cdot O(h^2) = O(h)$$

This means Euler's method is a first-order method: halving the step size halves the error.

Stability Analysis

Consider the test equation $y' = \lambda y$, where $\lambda < 0$ (decay problem).

The Euler formula gives:

$$y_{n+1} = (1 + h\lambda)\,y_n$$

For the numerical solution not to diverge, we need $|1 + h\lambda| \le 1$, i.e.:

$$-2 \le h\lambda \le 0$$

Since $\lambda < 0$, the stability condition is:

$$h \le \frac{2}{|\lambda|}$$

This is called conditional stability — the step size must be small enough to ensure stability.
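The bound is easy to check numerically. A minimal sketch (my own illustration, not from the original post) applies $y_{n+1} = (1 + h\lambda)\,y_n$ with $\lambda = -10$, once just below and once just above the threshold $h = 2/|\lambda| = 0.2$:

```python
# Forward Euler on y' = lambda*y reduces to repeated multiplication by (1 + h*lambda).
# For lambda = -10 the stability bound is h < 0.2.

def euler_final(lam, h, steps):
    """Apply y_{n+1} = (1 + h*lam) * y_n for `steps` steps, starting from y0 = 1."""
    y = 1.0
    for _ in range(steps):
        y = (1 + h * lam) * y
    return y

lam = -10.0
print(abs(euler_final(lam, 0.19, 200)))  # |1 + h*lam| = 0.9 -> decays
print(abs(euler_final(lam, 0.21, 200)))  # |1 + h*lam| = 1.1 -> blows up
```

With $h = 0.19$ the iterates shrink toward zero; with $h = 0.21$ they grow without bound, even though the true solution decays in both cases.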

Backward Euler Method

Implicit Scheme

The Backward Euler method uses the derivative value at the next time step:

$$y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1})$$

Note that $y_{n+1}$ appears on both sides of the equation — this is an implicit equation!

Why Do We Need Implicit Methods?

For stiff problems, explicit methods require extremely small step sizes for stability, while implicit methods can use large step sizes.

Characteristic of stiff problems: The system contains both fast and slow varying components. For example, in chemical reaction kinetics, some reactions complete in microseconds while others take hours.

Stability Analysis

For the test equation $y' = \lambda y$, Backward Euler gives:

$$y_{n+1} = \frac{y_n}{1 - h\lambda}$$

The amplification factor is $R(h\lambda) = \dfrac{1}{1 - h\lambda}$. When $\operatorname{Re}(\lambda) < 0$, for any $h > 0$ we have $|R(h\lambda)| < 1$.

This is called A-stable or unconditionally stable.

Python Implementation

def backward_euler(f, df_dy, x0, y0, x_end, h, tol=1e-10, max_iter=100):
    """
    Backward Euler method (using Newton iteration to solve the implicit equation)

    Parameters:
        f: Function f(x, y)
        df_dy: Partial derivative of f with respect to y
        x0, y0: Initial conditions
        x_end: Integration endpoint
        h: Step size
        tol: Newton iteration tolerance
        max_iter: Maximum number of iterations
    """
    x_values = [x0]
    y_values = [y0]

    x, y = x0, y0
    while x < x_end:
        h_step = min(h, x_end - x)  # clamp the last step
        x_new = x + h_step

        # Newton iteration to solve y_new = y + h*f(x_new, y_new)
        y_new = y  # Initial guess
        for _ in range(max_iter):
            g = y_new - y - h_step * f(x_new, y_new)
            dg = 1 - h_step * df_dy(x_new, y_new)
            y_new_next = y_new - g / dg

            if abs(y_new_next - y_new) < tol:
                y_new = y_new_next
                break
            y_new = y_new_next

        x, y = x_new, y_new
        x_values.append(x)
        y_values.append(y)

    return np.array(x_values), np.array(y_values)

# Stiff problem example: y' = -1000(y - cos(x)) - sin(x), y(0) = 1
# The solution decays rapidly onto the slow component cos(x)
def f_stiff(x, y):
    return -1000 * (y - np.cos(x)) - np.sin(x)

def df_dy_stiff(x, y):
    return -1000

# Compare explicit and implicit methods
x0, y0 = 0, 1
x_end = 0.1

fig, ax = plt.subplots(figsize=(7, 5))

# Forward Euler (stable only for h < 2/1000)
h_forward = 0.001
x_fwd, y_fwd = euler_method(f_stiff, x0, y0, x_end, h_forward)
ax.plot(x_fwd, y_fwd, 'b-', label=f'Forward Euler (h={h_forward})')

# Backward Euler (stable even with a large step size)
h_back = 0.01
x_bwd, y_bwd = backward_euler(f_stiff, df_dy_stiff, x0, y0, x_end, h_back)
ax.plot(x_bwd, y_bwd, 'ro-', label=f'Backward Euler (h={h_back})')

# Exact solution
x_exact = np.linspace(x0, x_end, 200)
y_exact = np.cos(x_exact) + (y0 - 1) * np.exp(-1000 * x_exact)
ax.plot(x_exact, y_exact, 'k--', label='Exact solution')

ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Stiff Problem: Explicit vs Implicit Methods')
ax.legend()
ax.grid(True, alpha=0.3)

plt.tight_layout()
plt.show()

Improved Euler Method (Heun's Method)

Predictor-Corrector Idea

Euler's method uses the slope at the starting point, but the slope of the solution curve changes over the step. Can we use an average slope?

Heun's method (also called the improved Euler method):

  1. Predict: Use Euler's formula to get $\tilde{y}_{n+1} = y_n + h\,f(x_n, y_n)$

  2. Correct: Use the average of the slopes at the starting and predicted points:

$$y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, \tilde{y}_{n+1})\right]$$

Geometric Interpretation

Imagine you want to walk from point A to point B:

  • Euler's method: Walk in the direction at A the whole way
  • Heun's method: First probe in the direction at A to A', then take the average of directions at A and A'

Order of Accuracy

Heun's method is a second-order method:

$$\text{GTE} = O(h^2)$$

Halving the step size reduces the error to one quarter!

def heun_method(f, x0, y0, x_end, h):
    """
    Heun's method (improved Euler method)
    """
    x_values = [x0]
    y_values = [y0]

    x, y = x0, y0
    while x < x_end:
        h_step = min(h, x_end - x)  # clamp the last step
        k1 = f(x, y)
        y_pred = y + h_step * k1
        k2 = f(x + h_step, y_pred)

        y = y + h_step * (k1 + k2) / 2
        x = x + h_step

        x_values.append(x)
        y_values.append(y)

    return np.array(x_values), np.array(y_values)

Runge-Kutta Methods

RK4: The Engineer's Swiss Army Knife

The Runge-Kutta family of methods is the workhorse of numerical ODE solving. The most famous member is the classical fourth-order Runge-Kutta method (RK4):

$$
\begin{aligned}
k_1 &= f(x_n, y_n) \\
k_2 &= f(x_n + h/2,\; y_n + h k_1/2) \\
k_3 &= f(x_n + h/2,\; y_n + h k_2/2) \\
k_4 &= f(x_n + h,\; y_n + h k_3) \\
y_{n+1} &= y_n + \frac{h}{6}\,(k_1 + 2k_2 + 2k_3 + k_4)
\end{aligned}
$$

Intuitive Explanation

RK4 samples 4 slopes over the interval:

  • $k_1$: slope at the left endpoint
  • $k_2$: slope at the midpoint (predicted using $k_1$)
  • $k_3$: slope at the midpoint (predicted using $k_2$, more accurate)
  • $k_4$: slope at the right endpoint

Then it takes the weighted average:

$$y_{n+1} = y_n + \frac{h}{6}\,(k_1 + 2k_2 + 2k_3 + k_4)$$

The $1:2:2:1$ weights are the same as in Simpson's integration formula!
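The connection to Simpson's rule can be checked directly: when $f(x, y) = g(x)$ depends only on $x$, we get $k_2 = k_3 = g(x + h/2)$ and one RK4 step collapses to Simpson's rule for $\int_x^{x+h} g$. A minimal sketch (my own check, with $g(x) = x^3$ chosen for illustration):

```python
# One RK4 step vs Simpson's rule for y' = g(x): they coincide, and for a
# cubic g both match the exact integral.

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h * (k1 + 2*k2 + 2*k3 + k4) / 6

g = lambda x: x**3                      # y' = x^3, y(0) = 0
f = lambda x, y: g(x)                   # right-hand side independent of y
h = 0.5
rk4_val = rk4_step(f, 0.0, 0.0, h)
simpson = h/6 * (g(0.0) + 4*g(h/2) + g(h))  # Simpson's rule on [0, h]
exact = h**4 / 4                        # integral of x^3 from 0 to h
print(rk4_val, simpson, exact)
```

All three numbers agree, which is also why RK4 inherits Simpson's exactness for cubics in this special case.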

Python Implementation

def rk4_method(f, x0, y0, x_end, h):
    """
    Classical fourth-order Runge-Kutta method
    """
    x_values = [x0]
    y_values = [y0]

    x, y = x0, y0
    while x < x_end:
        h_step = min(h, x_end - x)  # clamp the last step
        k1 = f(x, y)
        k2 = f(x + h_step/2, y + h_step*k1/2)
        k3 = f(x + h_step/2, y + h_step*k2/2)
        k4 = f(x + h_step, y + h_step*k3)

        y = y + h_step * (k1 + 2*k2 + 2*k3 + k4) / 6
        x = x + h_step

        x_values.append(x)
        y_values.append(y)

    return np.array(x_values), np.array(y_values)

# Compare Euler, Heun, and RK4
f = lambda x, y: np.sin(x * y)  # Nonlinear equation
x0, y0 = 0, 1
x_end = 5
h = 0.2

# Reference solution (computed with a very tight tolerance)
from scipy.integrate import solve_ivp
sol = solve_ivp(lambda x, y: np.sin(x * y[0]), [x0, x_end], [y0],
                dense_output=True, method='RK45', rtol=1e-10)

fig, axes = plt.subplots(1, 2, figsize=(14, 5))

x_ref = np.linspace(x0, x_end, 200)
y_ref = sol.sol(x_ref)[0]

# Left: Solutions from the three methods
x_euler, y_euler = euler_method(f, x0, y0, x_end, h)
x_heun, y_heun = heun_method(f, x0, y0, x_end, h)
x_rk4, y_rk4 = rk4_method(f, x0, y0, x_end, h)

axes[0].plot(x_ref, y_ref, 'k-', linewidth=2, label='Reference solution')
axes[0].plot(x_euler, y_euler, 'b.-', label=f'Euler (h={h})')
axes[0].plot(x_heun, y_heun, 'g.-', label=f'Heun (h={h})')
axes[0].plot(x_rk4, y_rk4, 'r.-', label=f'RK4 (h={h})')
axes[0].set_xlabel('x')
axes[0].set_ylabel('y')
axes[0].set_title('Comparison of Three Methods')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

# Right: Comparison of error orders
step_sizes = np.logspace(-2.5, -0.5, 15)
euler_errors, heun_errors, rk4_errors = [], [], []

y_true = sol.sol(x_end)[0]
for h in step_sizes:
    _, y_e = euler_method(f, x0, y0, x_end, h)
    _, y_h = heun_method(f, x0, y0, x_end, h)
    _, y_r = rk4_method(f, x0, y0, x_end, h)

    euler_errors.append(abs(y_e[-1] - y_true))
    heun_errors.append(abs(y_h[-1] - y_true))
    rk4_errors.append(abs(y_r[-1] - y_true))

axes[1].loglog(step_sizes, euler_errors, 'b.-', label='Euler O(h)')
axes[1].loglog(step_sizes, heun_errors, 'g.-', label='Heun O(h²)')
axes[1].loglog(step_sizes, rk4_errors, 'r.-', label='RK4 O(h⁴)')
axes[1].loglog(step_sizes, step_sizes, 'b--', alpha=0.5)
axes[1].loglog(step_sizes, step_sizes**2, 'g--', alpha=0.5)
axes[1].loglog(step_sizes, step_sizes**4, 'r--', alpha=0.5)
axes[1].set_xlabel('Step size h')
axes[1].set_ylabel('Error')
axes[1].set_title('Comparison of Error Orders')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('rk_comparison.png', dpi=150, bbox_inches='tight')
plt.show()

Derivation of RK4

The general form of an explicit $s$-stage RK method is:

$$y_{n+1} = y_n + h\sum_{i=1}^{s} b_i k_i$$

where:

$$k_i = f\Big(x_n + c_i h,\; y_n + h\sum_{j=1}^{i-1} a_{ij} k_j\Big)$$

The parameters $a_{ij}$, $b_i$, $c_i$ are determined by order conditions — requiring that the Taylor expansion of the numerical solution matches that of the true solution up to a certain order.

For RK4, there are 11 conditions to satisfy and 13 free parameters, so the solution is not unique. The classical RK4 is just one such choice.
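A few of these order conditions can be checked mechanically. The sketch below (my own illustration, not the post's code) verifies $\sum_i b_i = 1$, $\sum_i b_i c_i = \tfrac12$, $\sum_i b_i c_i^2 = \tfrac13$, and $\sum_{i,j} b_i a_{ij} c_j = \tfrac16$ for the classical RK4 Butcher tableau:

```python
import numpy as np

# Classical RK4 tableau: A is strictly lower triangular, b the weights, c the nodes.
A = np.array([[0,   0,   0, 0],
              [1/2, 0,   0, 0],
              [0,   1/2, 0, 0],
              [0,   0,   1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 1/2, 1/2, 1])

print(b.sum())       # order-1 condition: sum b_i = 1
print(b @ c)         # order-2 condition: sum b_i c_i = 1/2
print(b @ c**2)      # an order-3 condition: sum b_i c_i^2 = 1/3
print(b @ (A @ c))   # an order-3 condition: sum b_i a_ij c_j = 1/6
```

Any tableau satisfying the full set of conditions through order 4 gives a fourth-order method; the classical choice is simply the most convenient one.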

Multistep Methods

Idea: Utilize Historical Information

Runge-Kutta methods are single-step methods — each step uses only the current value $y_n$. But we've already computed so many historical points; why not use them?

Multistep methods use multiple historical points to improve accuracy or efficiency.

Adams-Bashforth Methods (Explicit)

Two-step Adams-Bashforth (AB2):

$$y_{n+1} = y_n + \frac{h}{2}\left(3f_n - f_{n-1}\right)$$

Four-step Adams-Bashforth (AB4):

$$y_{n+1} = y_n + \frac{h}{24}\left(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}\right)$$

where $f_k = f(x_k, y_k)$.
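The AB2 formula can be sketched in a few lines (my own illustration; I use a single Heun step for startup, which is one choice among several):

```python
import math

# Two-step Adams-Bashforth: y_{n+1} = y_n + h/2 * (3*f_n - f_{n-1}),
# with one Heun step to generate the second starting value.
def ab2(f, x0, y0, x_end, h):
    xs, ys = [x0], [y0]
    k1 = f(x0, y0)                      # Heun startup step
    k2 = f(x0 + h, y0 + h*k1)
    xs.append(x0 + h)
    ys.append(y0 + h*(k1 + k2)/2)
    while xs[-1] < x_end - 1e-12:
        fn = f(xs[-1], ys[-1])          # newest slope
        fn1 = f(xs[-2], ys[-2])         # previous slope (reused for free)
        ys.append(ys[-1] + h/2 * (3*fn - fn1))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = ab2(lambda x, y: -2*y, 0.0, 1.0, 1.0, 0.01)
print(abs(ys[-1] - math.exp(-2*xs[-1])))  # small second-order error
```

Note the efficiency argument: each AB2 step needs only one new evaluation of $f$, whereas Heun needs two and RK4 four.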

Adams-Moulton Methods (Implicit)

Two-step Adams-Moulton (AM2):

$$y_{n+1} = y_n + \frac{h}{12}\left(5f_{n+1} + 8f_n - f_{n-1}\right)$$

This is implicit because $f_{n+1} = f(x_{n+1}, y_{n+1})$ contains the unknown $y_{n+1}$.

Predictor-Corrector Methods

In practice, predictor-corrector combinations are often used:

  1. Use Adams-Bashforth to predict $\tilde{y}_{n+1}$
  2. Evaluate $\tilde{f}_{n+1} = f(x_{n+1}, \tilde{y}_{n+1})$
  3. Use Adams-Moulton, with $\tilde{f}_{n+1}$ in place of $f_{n+1}$, to correct $y_{n+1}$
def adams_bashforth_moulton(f, x0, y0, x_end, h):
    """
    Adams-Bashforth-Moulton predictor-corrector method (4th order)
    Uses RK4 for startup
    """
    # Use RK4 to compute the first 4 points
    x_values = [x0]
    y_values = [y0]
    f_values = [f(x0, y0)]

    x, y = x0, y0
    for _ in range(3):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)

        y = y + h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x = x + h

        x_values.append(x)
        y_values.append(y)
        f_values.append(f(x, y))

    # Adams-Bashforth-Moulton loop
    while x < x_end:
        # AB4 predict
        y_pred = y_values[-1] + h/24 * (
            55*f_values[-1] - 59*f_values[-2] +
            37*f_values[-3] - 9*f_values[-4]
        )

        x_new = x_values[-1] + h
        f_pred = f(x_new, y_pred)

        # AM4 correct
        y_corr = y_values[-1] + h/24 * (
            9*f_pred + 19*f_values[-1] -
            5*f_values[-2] + f_values[-3]
        )

        x = x_new  # advance x (without this the loop never terminates)
        x_values.append(x_new)
        y_values.append(y_corr)
        f_values.append(f(x_new, y_corr))

    return np.array(x_values), np.array(y_values)

The Startup Problem

Multistep methods need multiple initial points, but the initial value problem only provides one. Solution:

  1. Use a single-step method (like RK4) to compute the first few points
  2. Then switch to the multistep method

Adaptive Step Size Control

The Problem: How to Choose Step Size?

  • Step size too large: Unacceptable error
  • Step size too small: Computation too slow

Adaptive step size idea: Let the algorithm adjust the step size based on local error.

Embedded Runge-Kutta Methods

Runge-Kutta-Fehlberg (RKF45) uses the same six $k_i$ values to simultaneously compute a 4th-order approximation $y_{n+1}^{(4)}$ and a 5th-order approximation $y_{n+1}^{(5)}$. Error estimate:

$$E = \left|y_{n+1}^{(5)} - y_{n+1}^{(4)}\right|$$

Step Size Adjustment Strategy

Given a tolerance $\varepsilon$: if $E > \varepsilon$, reduce the step size and recompute; if $E \ll \varepsilon$, increase the step size.

New step size estimate:

$$h_{\text{new}} = h\left(\frac{\varepsilon}{E}\right)^{1/5}$$

In practice, a safety factor (like 0.9) and bounds on how fast $h$ may change are added.

def rkf45_adaptive(f, x0, y0, x_end, tol=1e-6, h_init=0.1):
    """
    Runge-Kutta-Fehlberg 4(5) adaptive step size method
    """
    # Butcher tableau coefficients
    a2, a3, a4, a5, a6 = 1/4, 3/8, 12/13, 1, 1/2

    b21 = 1/4
    b31, b32 = 3/32, 9/32
    b41, b42, b43 = 1932/2197, -7200/2197, 7296/2197
    b51, b52, b53, b54 = 439/216, -8, 3680/513, -845/4104
    b61, b62, b63, b64, b65 = -8/27, 2, -3544/2565, 1859/4104, -11/40

    # 4th and 5th order weights
    c4 = [25/216, 0, 1408/2565, 2197/4104, -1/5, 0]
    c5 = [16/135, 0, 6656/12825, 28561/56430, -9/50, 2/55]

    x_values = [x0]
    y_values = [y0]

    x, y = x0, y0
    h = h_init

    while x < x_end:
        if x + h > x_end:
            h = x_end - x

        # Compute k values
        k1 = f(x, y)
        k2 = f(x + a2*h, y + h*b21*k1)
        k3 = f(x + a3*h, y + h*(b31*k1 + b32*k2))
        k4 = f(x + a4*h, y + h*(b41*k1 + b42*k2 + b43*k3))
        k5 = f(x + a5*h, y + h*(b51*k1 + b52*k2 + b53*k3 + b54*k4))
        k6 = f(x + a6*h, y + h*(b61*k1 + b62*k2 + b63*k3 + b64*k4 + b65*k5))

        ks = [k1, k2, k3, k4, k5, k6]

        # 4th and 5th order approximations
        y4 = y + h * sum(c4[i]*ks[i] for i in range(6))
        y5 = y + h * sum(c5[i]*ks[i] for i in range(6))

        # Error estimate
        err = abs(y5 - y4)
        if err < 1e-15:
            err = 1e-15  # Avoid division by zero

        if err <= tol:
            # Accept this step
            x = x + h
            y = y5  # Use the higher-order result
            x_values.append(x)
            y_values.append(y)

        # New step size: shrinks after a rejection, may grow after an acceptance
        h_new = 0.9 * h * (tol / err) ** 0.2
        h = min(max(h_new, h/10), h*2)  # Limit the rate of change

    return np.array(x_values), np.array(y_values)

Numerical Solution of Systems of Differential Equations

Vectorization

Higher-order equations and systems can all be written as first-order vector equations:

$$\mathbf{y}' = \mathbf{f}(x, \mathbf{y}), \qquad \mathbf{y} \in \mathbb{R}^n$$

All previous methods generalize directly: replace scalars by vectors.
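As a quick illustration (my own sketch, not the post's code): $y'' + y = 0$ with $y(0) = 0$, $y'(0) = 1$ becomes the system $\mathbf{u}' = (u_2, -u_1)$ with state $\mathbf{u} = (y, y')$, and the scalar RK4 loop carries over verbatim once scalars become NumPy arrays:

```python
import numpy as np

# Second-order ODE y'' + y = 0 rewritten as a first-order system.
def f(t, u):
    y, v = u                      # v = y'
    return np.array([v, -y])      # [y', y''] = [v, -y]

# Plain vector RK4 with a fixed step, integrating from 0 to pi
u = np.array([0.0, 1.0])          # [y(0), y'(0)]
t, h = 0.0, np.pi / 1000
for _ in range(1000):
    k1 = f(t, u)
    k2 = f(t + h/2, u + h*k1/2)
    k3 = f(t + h/2, u + h*k2/2)
    k4 = f(t + h, u + h*k3)
    u = u + h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h

print(u)  # should be close to [sin(pi), cos(pi)] = [0, -1]
```

The exact solution is $y = \sin t$, so the final state should match $(\sin\pi, \cos\pi) = (0, -1)$ to RK4 accuracy.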

Example: The Lorenz System

The Lorenz system is a classic example from chaos theory:

$$
\begin{aligned}
\dot{x} &= \sigma(y - x) \\
\dot{y} &= x(\rho - z) - y \\
\dot{z} &= xy - \beta z
\end{aligned}
$$

With parameters $\sigma = 10$, $\rho = 28$, $\beta = 8/3$, it produces the famous "butterfly effect."

from mpl_toolkits.mplot3d import Axes3D

def lorenz_system(state, t, sigma=10, rho=28, beta=8/3):
    x, y, z = state
    return np.array([
        sigma * (y - x),
        x * (rho - z) - y,
        x * y - beta * z
    ])

def rk4_system(f, t0, y0, t_end, h, args=()):
    """
    Vector version of RK4
    """
    t_values = [t0]
    y_values = [np.array(y0)]

    t = t0
    y = np.array(y0)

    while t < t_end:
        k1 = f(y, t, *args)
        k2 = f(y + h*k1/2, t + h/2, *args)
        k3 = f(y + h*k2/2, t + h/2, *args)
        k4 = f(y + h*k3, t + h, *args)

        y = y + h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t = t + h

        t_values.append(t)
        y_values.append(y.copy())

    return np.array(t_values), np.array(y_values)

# Solve the Lorenz system
t0 = 0
y0 = [1, 1, 1]
t_end = 50
h = 0.01

t, y = rk4_system(lorenz_system, t0, y0, t_end, h)

# 3D plot
fig = plt.figure(figsize=(12, 5))

ax1 = fig.add_subplot(121, projection='3d')
ax1.plot(y[:, 0], y[:, 1], y[:, 2], lw=0.5)
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
ax1.set_zlabel('Z')
ax1.set_title('Lorenz Attractor')

# Sensitivity test
y0_perturbed = [1.001, 1, 1] # Small perturbation
_, y_pert = rk4_system(lorenz_system, t0, y0_perturbed, t_end, h)

ax2 = fig.add_subplot(122)
ax2.plot(t, y[:, 0], 'b-', label='Original')
ax2.plot(t, y_pert[:, 0], 'r--', label='Perturbed (Δx₀ = 0.001)')
ax2.set_xlabel('Time')
ax2.set_ylabel('x(t)')
ax2.set_title('Butterfly Effect: Sensitivity to Initial Conditions')
ax2.legend()
ax2.set_xlim([0, 30])

plt.tight_layout()
plt.savefig('lorenz.png', dpi=150, bbox_inches='tight')
plt.show()

Symplectic and Geometric Integrators

Problem: Long-Time Simulation

For Hamiltonian systems (like celestial mechanics), traditional methods cause energy drift: the computed energy $H(q_n, p_n)$ wanders steadily away from its initial value as the integration proceeds, so orbits spiral in or out unphysically.

Symplectic Euler Method

Explicit symplectic Euler (for a separable Hamiltonian $H(q, p) = T(p) + V(q)$):

$$p_{n+1} = p_n - h\,\frac{\partial V}{\partial q}(q_n), \qquad q_{n+1} = q_n + h\,\frac{\partial T}{\partial p}(p_{n+1})$$

Note that $p_{n+1}$ is used to update $q$ — this "semi-implicit" structure is what preserves the symplectic structure.
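A minimal sketch of this scheme for the harmonic oscillator $H = (p^2 + q^2)/2$ (my own illustration; the post's code below uses Verlet instead) shows the energy error staying bounded rather than drifting:

```python
# Symplectic Euler for H = p^2/2 + q^2/2 (omega = 1):
# the already-updated p is used to advance q.
q, p, h = 1.0, 0.0, 0.1
E0 = 0.5 * (p*p + q*q)
max_drift = 0.0
for _ in range(10000):                  # 1000 time units
    p = p - h * q                       # p_{n+1} = p_n - h * dV/dq(q_n)
    q = q + h * p                       # q_{n+1} = q_n + h * dT/dp(p_{n+1})
    max_drift = max(max_drift, abs(0.5*(p*p + q*q) - E0))
print(max_drift)  # oscillates at O(h) but never grows secularly
```

The energy error oscillates with a small amplitude proportional to $h$ but shows no secular growth, which is the hallmark of a symplectic method.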

Störmer-Verlet Method

More commonly used is the Leapfrog or Verlet method:

$$
\begin{aligned}
p_{n+1/2} &= p_n - \frac{h}{2}\,\frac{\partial V}{\partial q}(q_n) \\
q_{n+1} &= q_n + h\,\frac{\partial T}{\partial p}(p_{n+1/2}) \\
p_{n+1} &= p_{n+1/2} - \frac{h}{2}\,\frac{\partial V}{\partial q}(q_{n+1})
\end{aligned}
$$

This is a second-order method that preserves the symplectic structure.

def verlet_harmonic_oscillator(omega, q0, p0, t_end, h):
    """
    Verlet method for the harmonic oscillator
    H = p²/2 + ω²q²/2
    """
    t_values = [0]
    q_values = [q0]
    p_values = [p0]
    energy_values = [p0**2/2 + omega**2*q0**2/2]

    q, p = q0, p0
    t = 0

    while t < t_end:
        # Half-step update for momentum
        p_half = p - h/2 * omega**2 * q
        # Full-step update for position
        q = q + h * p_half
        # Half-step update for momentum
        p = p_half - h/2 * omega**2 * q

        t = t + h
        t_values.append(t)
        q_values.append(q)
        p_values.append(p)
        energy_values.append(p**2/2 + omega**2*q**2/2)

    return (np.array(t_values), np.array(q_values),
            np.array(p_values), np.array(energy_values))

# Compare energy conservation between RK4 and Verlet
omega = 1
q0, p0 = 1, 0
t_end = 100
h = 0.1

# Verlet
t_v, q_v, p_v, E_v = verlet_harmonic_oscillator(omega, q0, p0, t_end, h)

# RK4
def harmonic_ode(state, t, omega):
    q, p = state
    return np.array([p, -omega**2 * q])

t_rk, y_rk = rk4_system(harmonic_ode, 0, [q0, p0], t_end, h, args=(omega,))
E_rk = y_rk[:, 1]**2/2 + omega**2 * y_rk[:, 0]**2/2

fig, axes = plt.subplots(1, 2, figsize=(14, 5))

# Phase space trajectory
axes[0].plot(q_v, p_v, 'b-', alpha=0.7, label='Verlet')
axes[0].plot(y_rk[:, 0], y_rk[:, 1], 'r--', alpha=0.7, label='RK4')
circle = plt.Circle((0, 0), 1, fill=False, color='k', linestyle=':')
axes[0].add_patch(circle)
axes[0].set_xlabel('q')
axes[0].set_ylabel('p')
axes[0].set_title('Phase Space Trajectory')
axes[0].legend()
axes[0].set_aspect('equal')

# Energy vs time
E0 = p0**2/2 + omega**2*q0**2/2
axes[1].plot(t_v, (E_v - E0)/E0, 'b-', label='Verlet')
axes[1].plot(t_rk, (E_rk - E0)/E0, 'r-', label='RK4')
axes[1].set_xlabel('Time')
axes[1].set_ylabel('Relative energy error')
axes[1].set_title('Comparison of Energy Conservation')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('symplectic.png', dpi=150, bbox_inches='tight')
plt.show()

Convergence and Stability Theory

Convergence

Definition: A numerical method is convergent if $y_n \to y(x_n)$ as $h \to 0$ (with $x_n = x_0 + nh$ held fixed).

Lax Equivalence Theorem: For linear difference schemes, consistency + stability = convergence.

Stability Region

For the test equation $y' = \lambda y$, define $z = h\lambda$.

The method's stability region is the set of $z \in \mathbb{C}$ for which $|R(z)| \le 1$, where $R(z)$ is the amplification factor.

  • Euler method: $R(z) = 1 + z$; the stability region is the disk $|1 + z| \le 1$
  • RK4: $R(z) = 1 + z + \dfrac{z^2}{2} + \dfrac{z^3}{6} + \dfrac{z^4}{24}$
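These amplification factors are easy to evaluate directly. The sketch below (my own check) tests a point on the negative real axis that lies inside RK4's real stability interval (which extends to roughly $z \approx -2.785$) but outside Euler's:

```python
# Amplification factors on the negative real axis:
# forward Euler R(z) = 1 + z; RK4 R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24.
def R_euler(z):
    return 1 + z

def R_rk4(z):
    return 1 + z + z**2/2 + z**3/6 + z**4/24

z = -2.5  # z = h*lambda
print(abs(R_euler(z)))  # 1.5   -> outside Euler's stability region
print(abs(R_rk4(z)))    # < 1   -> still inside RK4's region
```

So for a given decay rate $\lambda$, RK4 tolerates a step size roughly $2.785/2 \approx 1.4$ times larger than forward Euler before losing stability.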

A-Stability

A-stable: the entire left half-plane $\operatorname{Re}(z) < 0$ is contained in the stability region.

Unfortunately, no explicit Runge-Kutta (or explicit linear multistep) method is A-stable; and by Dahlquist's second barrier, an A-stable linear multistep method can have order at most 2.

Using SciPy to Solve ODEs

In practice, we typically use mature libraries:

from scipy.integrate import solve_ivp, odeint

# Method 1: odeint (old API, but simple)
def f(y, t):
    return -2 * y

t = np.linspace(0, 5, 100)
y0 = 1
sol_odeint = odeint(f, y0, t)

# Method 2: solve_ivp (new API, more powerful)
def f_ivp(t, y):  # Note the different argument order!
    return -2 * y

sol_ivp = solve_ivp(f_ivp, [0, 5], [y0], t_eval=t, method='RK45')

# Available methods:
# 'RK45': Adaptive Runge-Kutta 4(5)
# 'RK23': Adaptive Runge-Kutta 2(3)
# 'DOP853': High-order Dormand-Prince
# 'Radau': Implicit Runge-Kutta (stiff problems)
# 'BDF': Backward Differentiation Formula (stiff problems)
# 'LSODA': Automatic stiffness detection

# Stiff problem example
def van_der_pol(t, y, mu=1000):
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

sol_stiff = solve_ivp(van_der_pol, [0, 3000], [2, 0],
method='Radau', dense_output=True)

t_plot = np.linspace(0, 3000, 1000)
y_plot = sol_stiff.sol(t_plot)

plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(t_plot, y_plot[0])
plt.xlabel('t')
plt.ylabel('x')
plt.title('Van der Pol Oscillator (μ=1000)')

plt.subplot(122)
plt.plot(y_plot[0], y_plot[1])
plt.xlabel('x')
plt.ylabel('dx/dt')
plt.title('Phase Portrait')

plt.tight_layout()
plt.show()

Practical Application: Numerical Simulation of Epidemic Models

SIR Model

The SIR model splits the population into susceptible ($S$), infected ($I$), and recovered ($R$) compartments:

$$\frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad \frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$

def sir_model(t, y, beta, gamma, N):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return [dSdt, dIdt, dRdt]

# Parameter settings (using early COVID-19 data as reference)
N = 1e6 # Total population
I0 = 1 # Initial infected
R0_ratio = 2.5 # Basic reproduction number
gamma = 1/14 # Recovery rate (average 14 days to recover)
beta = R0_ratio * gamma # Transmission rate

# Solve
sol = solve_ivp(sir_model, [0, 200], [N-I0, I0, 0],
args=(beta, gamma, N),
t_eval=np.linspace(0, 200, 500),
method='RK45')

fig, axes = plt.subplots(1, 2, figsize=(14, 5))

axes[0].plot(sol.t, sol.y[0]/1e3, 'b-', label='Susceptible S')
axes[0].plot(sol.t, sol.y[1]/1e3, 'r-', label='Infected I')
axes[0].plot(sol.t, sol.y[2]/1e3, 'g-', label='Recovered R')
axes[0].set_xlabel('Days')
axes[0].set_ylabel('Population (thousands)')
axes[0].set_title(f'SIR Model (R₀ = {R0_ratio})')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

# Comparison of different R0 values
R0_values = [1.5, 2.0, 2.5, 3.0]
for R0 in R0_values:
    beta = R0 * gamma
    sol = solve_ivp(sir_model, [0, 200], [N-I0, I0, 0],
                    args=(beta, gamma, N),
                    t_eval=np.linspace(0, 200, 500))
    axes[1].plot(sol.t, sol.y[1]/1e3, label=f'R₀ = {R0}')

axes[1].set_xlabel('Days')
axes[1].set_ylabel('Number of infected (thousands)')
axes[1].set_title('Effect of Basic Reproduction Number')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('sir_model.png', dpi=150, bbox_inches='tight')
plt.show()

Exercises

Basic Problems

Exercise 1: Use Euler's method to solve $y' = -2y$, $y(0) = 1$ on $[0, 2]$ (the model problem from this chapter). Take several step sizes and compare with the exact solution $y = e^{-2x}$.

Exercise 2: Implement Heun's method and use it to solve an initial value problem with a known solution, for example $y' = x + y$, $y(0) = 1$ (exact solution $y = 2e^x - x - 1$). Verify numerically that it is second-order.

Exercise 3: Prove that RK4 is exact for $f(x, y) = ax + b$ ($a$, $b$ constants); more generally, show that when $f$ depends only on $x$ and is a polynomial of degree at most 3, RK4 reduces to Simpson's rule.

Exercise 4: Consider the stiff equation $y' = \lambda y$, $y(0) = 1$ with, say, $\lambda = -100$. (a) Using forward Euler, find the maximum step size that guarantees stability. (b) Using backward Euler, verify stability for any step size.

Exercise 5: Implement the Adams-Bashforth two-step method and compare its computational efficiency with RK4.

Intermediate Problems

Exercise 6: Implement an adaptive-step-size RK4 method that keeps the local error within a specified tolerance. Test it on a problem whose solution varies rapidly in some regions and slowly in others.

Exercise 7: Use the Verlet method to simulate a double pendulum. Observe the chaotic behavior.

Exercise 8: Consider the Lotka-Volterra predator-prey model:

$$\frac{dx}{dt} = \alpha x - \beta xy, \qquad \frac{dy}{dt} = \delta xy - \gamma y$$

(a) Use RK4 to simulate the population dynamics (b) Plot the phase portrait and observe the periodic orbits

Exercise 9: Use numerical methods to verify: for a harmonic oscillator, symplectic integrators (like Verlet) have better energy conservation than RK4 in long-time simulation.

Exercise 10: Implement the Dormand-Prince method (RK5(4)) and compare with SciPy's solve_ivp.

Advanced Problems

Exercise 11: Study the performance of numerical methods on stiff problems. Consider, for example:

$$y' = -1000\,(y - \sin x) + \cos x, \qquad y(0) = 0$$

whose exact solution is $y = \sin x$. (a) With explicit methods, explore the stability restrictions (b) Implement the implicit trapezoidal method and observe its unconditional stability

Exercise 12: Consider the two-body problem (Kepler problem), in normalized units:

$$\ddot{\mathbf{r}} = -\frac{\mathbf{r}}{|\mathbf{r}|^3}$$

Use different methods (RK4, symplectic methods) for long-time orbit simulation and compare: (a) Preservation of the orbital shape (b) Energy conservation (c) Angular momentum conservation

Exercise 13: Implement an exponential integrator for solving $y' = Ay + g(y)$, where $Ay$ is the stiff linear part.

Exercise 14: Study error accumulation. For chaotic systems (like the Lorenz system), errors grow exponentially. Design an experiment to quantify this growth rate.

Exercise 15: Implement a simple automatic stiffness detection algorithm: judge whether a problem is stiff based on the eigenvalues of the Jacobian and automatically select an appropriate solver.

Summary

In this chapter, we learned a complete system of numerical methods for ordinary differential equations:

Method           Order   Type                Characteristics
Forward Euler    1       Explicit            Simple, conditionally stable
Backward Euler   1       Implicit            A-stable, suitable for stiff problems
Heun             2       Explicit            Predictor-corrector
RK4              4       Explicit            Engineering workhorse
RKF45            4(5)    Explicit            Adaptive step size
Adams-Bashforth  varies  Explicit multistep  Efficient
Adams-Moulton    varies  Implicit multistep  More stable
Verlet           2       Symplectic          Structure-preserving

Principles for choosing methods:

  1. General problems: RK45 (adaptive)
  2. Stiff problems: BDF or Radau
  3. Long-time Hamiltonian systems: Symplectic integrators
  4. High precision requirements: High-order embedded RK

Practical advice:

  • First try scipy.integrate.solve_ivp
  • If too slow or unstable, consider switching methods
  • Always check convergence: see if results change when step size is reduced
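
The last point can be made concrete (a sketch of mine assuming SciPy, with a hypothetical test problem): solve the same problem once loosely and once tightly, and trust only the digits on which the two runs agree.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Convergence check: compare endpoint values at two tolerance levels.
rhs = lambda t, y: [-2 * y[0]]
loose = solve_ivp(rhs, [0, 3], [1.0], rtol=1e-4, atol=1e-7)
tight = solve_ivp(rhs, [0, 3], [1.0], rtol=1e-10, atol=1e-13)

diff = abs(loose.y[0, -1] - tight.y[0, -1])
print(diff)  # roughly at the loose tolerance level
```

The same idea works with fixed-step methods by halving the step size instead of tightening the tolerance.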

In the next chapter, we will learn about boundary value problems — how to solve equations whose conditions are given not only at the initial point but also at the endpoint.

  • Post title: Ordinary Differential Equations (11): Numerical Methods
  • Post author: Chen Kai
  • Create time: 2019-05-29 15:30:00
  • Post link: https://www.chenk.top/ode-chapter-11-numerical-methods/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.