Many differential equations arising in practical problems cannot be
solved by analytical methods — this is when numerical methods become our
most powerful weapon. From Euler's simple idea to modern adaptive
algorithms, numerical methods allow us to "approximately" solve
virtually any differential equation. This chapter explores in depth the
principles, implementations, and error analysis of various numerical
methods.
Why Do We Need Numerical Methods?
Looking back at previous chapters, we learned many analytical
methods: separation of variables, integrating factors, variation of
parameters, Laplace transforms, and more. These methods are elegant, but
have a fatal limitation — they only work for specific types of equations.
Consider this seemingly simple equation:

$$y' = x^2 + y^2$$

Try solving it with any method we've learned — you'll find nothing works! This equation has no solution in terms of elementary functions.
More generally, most differential equations encountered in
engineering and science cannot be solved analytically:
Nonlinear equations (like the Navier-Stokes equations)
Variable coefficient equations
High-dimensional systems
Problems with complex boundary conditions
The core idea of numerical methods: since we cannot obtain the exact solution, let's compute approximate values at discrete points.
Euler's Method: The Simplest Numerical Method
Basic Idea
Euler's method is the ancestor of all numerical methods, proposed by
Leonhard Euler in 1768. Its idea is extremely simple:
approximate a curve with its tangent line.
Consider the initial value problem:

$$y' = f(x, y), \qquad y(x_0) = y_0$$

At the point $(x_n, y_n)$, the slope of the tangent to the solution curve is $f(x_n, y_n)$. Walking a small step $h$ along this tangent, we get:

$$y_{n+1} = y_n + h\, f(x_n, y_n)$$

This is the Forward Euler formula. Repeating this process generates approximations $y_1, y_2, y_3, \dots$ at the grid points $x_n = x_0 + nh$.
Geometric Intuition
Imagine you're walking through a maze, but can only see a small patch
of ground beneath your feet. Your strategy is:
Check the slope of the terrain at your current position
(derivative)
Take a small step in that direction
Repeat
This is the essence of Euler's method — local linearization.
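The walk-along-the-tangent loop above can be sketched in a few lines of Python (the test problem $y' = y$, $y(0) = 1$ is an illustrative choice, not from the text):

```python
import numpy as np

def forward_euler(f, x0, y0, x_end, h):
    """Forward Euler: repeatedly step along the local tangent line."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    while x < x_end - 1e-12:          # small guard against float round-off
        y = y + h * f(x, y)           # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = forward_euler(lambda x, y: y, 0.0, 1.0, 1.0, 0.1)
print(ys[-1])   # approximates e = 2.71828... from below
```

With $h = 0.1$ the method takes ten steps, each multiplying $y$ by $1.1$, so the final value is $1.1^{10} \approx 2.594$ — visibly below $e$, as the error analysis below explains.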
Local Truncation Error (LTE): Error introduced in a single step.

Through Taylor expansion:

$$y(x_{n+1}) = y(x_n) + h\, y'(x_n) + \frac{h^2}{2}\, y''(\xi_n)$$

The Euler formula gives $y_{n+1} = y_n + h\, f(x_n, y_n)$, so:

$$\text{LTE} = \frac{h^2}{2}\, y''(\xi_n) = O(h^2)$$

Global Truncation Error (GTE): Accumulated error from $x_0$ to $x_n$. After $n = (x_n - x_0)/h$ steps:

$$\text{GTE} = O(h)$$

This means Euler's method is a first-order method: halving the step size halves the error.
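A quick experiment (an illustrative check of my own, again using $y' = y$) confirms the first-order behavior: halving $h$ roughly halves the error at $x = 1$:

```python
import numpy as np

def euler_error(h):
    """Global error of forward Euler for y' = y, y(0) = 1, measured at x = 1."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * y
    return abs(y - np.e)

for h in [0.1, 0.05, 0.025]:
    print(f"h = {h:<6} error = {euler_error(h):.5f}")
# The error roughly halves each time h is halved: first-order convergence.
```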
Stability Analysis

Consider the test equation $y' = \lambda y$, where $\lambda < 0$ (decay problem). The Euler formula gives:

$$y_{n+1} = (1 + h\lambda)\, y_n$$

For the numerical solution not to diverge, we need $|1 + h\lambda| \le 1$, i.e.:

$$-2 \le h\lambda \le 0$$

Since $\lambda < 0$, the stability condition is:

$$h \le \frac{2}{|\lambda|}$$

This is called conditional stability — the step size must be small enough to ensure stability.
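The stability boundary can be seen numerically. For $\lambda = -10$ the bound is $h \le 0.2$; a step just below it decays, a step just above it blows up (the specific values are illustrative choices of mine):

```python
def euler_test_equation(lam, h, n_steps, y0=1.0):
    """Forward Euler applied to y' = lam * y: each step multiplies by (1 + h*lam)."""
    y = y0
    for _ in range(n_steps):
        y = (1.0 + h * lam) * y
    return y

lam = -10.0   # true solution decays; stability needs h <= 2/|lam| = 0.2
print(euler_test_equation(lam, 0.19, 200))   # |1 + h*lam| = 0.9  -> decays
print(euler_test_equation(lam, 0.21, 200))   # |1 + h*lam| = 1.1  -> explodes
```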
Backward Euler Method
Implicit Scheme
The Backward Euler method uses the derivative value at the next time step:

$$y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1})$$

Note that $y_{n+1}$ appears on both sides of the equation — this is an implicit equation!
Why Do We Need Implicit Methods?
For stiff problems, explicit methods require
extremely small step sizes for stability, while implicit methods can use
large step sizes.
Characteristic of stiff problems: the system contains both fast- and slow-varying components. For example, in chemical reaction kinetics, some reactions complete in microseconds while others take hours.
Stability Analysis
For the test equation $y' = \lambda y$, Backward Euler gives:

$$y_{n+1} = \frac{y_n}{1 - h\lambda}$$

The amplification factor is $R(h\lambda) = \dfrac{1}{1 - h\lambda}$. When $\operatorname{Re}(\lambda) < 0$, for any $h > 0$ we have $|R(h\lambda)| < 1$. This is called A-stable or unconditionally stable.
```python
import numpy as np

def backward_euler(f, df_dy, x0, y0, x_end, h, tol=1e-10, max_iter=100):
    """
    Backward Euler method (using Newton iteration to solve the implicit equation)

    Parameters:
        f:        Function f(x, y)
        df_dy:    Partial derivative of f with respect to y
        x0, y0:   Initial conditions
        x_end:    Integration endpoint
        h:        Step size
        tol:      Newton iteration tolerance
        max_iter: Maximum number of iterations
    """
    x_values = [x0]
    y_values = [y0]
    x, y = x0, y0
    while x < x_end:
        x_new = x + h
        # Newton iteration to solve y_new = y + h*f(x_new, y_new)
        y_new = y  # Initial guess
        for _ in range(max_iter):
            g = y_new - y - h * f(x_new, y_new)
            dg = 1 - h * df_dy(x_new, y_new)
            y_new_next = y_new - g / dg
            if abs(y_new_next - y_new) < tol:
                y_new = y_new_next
                break
            y_new = y_new_next
        x, y = x_new, y_new
        x_values.append(x)
        y_values.append(y)
    return np.array(x_values), np.array(y_values)
```
```python
# Stiff problem example: y' = -1000(y - cos(x)) - sin(x), y(0) = 1
# The solution has a rapidly decaying transient onto cos(x)
def f_stiff(x, y):
    return -1000 * (y - np.cos(x)) - np.sin(x)
```
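To see why this example motivates implicit methods, here is a sketch comparing the two Euler variants on it with the same step size $h = 0.01$. Because $f$ here is linear in $y$, the backward Euler update can be solved in closed form, so no Newton iteration is needed (this simplification is mine, not from the text):

```python
import numpy as np

def f_stiff(x, y):
    # Stiff right-hand side; with y(0) = 1 the exact solution is cos(x)
    return -1000.0 * (y - np.cos(x)) - np.sin(x)

# Forward Euler with h = 0.01 violates h <= 2/1000 = 0.002 and explodes
h, x, y_fwd = 0.01, 0.0, 1.0
for _ in range(100):
    y_fwd += h * f_stiff(x, y_fwd)
    x += h

# Backward Euler, same step size: solve the (linear) implicit equation exactly:
# y_new = y + h * (-1000*(y_new - cos(x_new)) - sin(x_new))
x, y_bwd = 0.0, 1.0
for _ in range(100):
    x += h
    y_bwd = (y_bwd + h * (1000.0 * np.cos(x) - np.sin(x))) / (1.0 + 1000.0 * h)

print("forward Euler :", y_fwd)   # astronomically large
print("backward Euler:", y_bwd)   # close to cos(1) = 0.5403...
```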
Runge-Kutta Methods

The general form of an $s$-stage RK method is:

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i$$

where:

$$k_i = f\Bigl(x_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\Bigr)$$

The parameters $a_{ij}$, $b_i$, $c_i$ are determined by order conditions — requiring that the Taylor expansion of the numerical solution matches that of the true solution up to a certain order.
For a four-stage fourth-order method there are 11 conditions to satisfy but 13 free parameters, so the solution is not unique. The classical RK4 is just one such choice.
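For reference, a minimal sketch of one classical RK4 step (the test problem $y' = y$ is an illustrative choice of mine):

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y, y(0) = 1: even a single large step h = 0.5 lands close to e^0.5
y1 = rk4_step(lambda x, y: y, 0.0, 1.0, 0.5)
print(y1, np.exp(0.5))   # error below 1e-3
```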
Multistep Methods
Idea: Utilize Historical Information
Runge-Kutta methods are single-step methods — each step only uses $(x_n, y_n)$. But we've already computed so many historical points; why not use them?
Multistep methods use multiple historical points to
improve accuracy or efficiency.
Adams-Bashforth Methods (Explicit)
Two-step Adams-Bashforth (AB2):

$$y_{n+1} = y_n + \frac{h}{2}\bigl(3 f_n - f_{n-1}\bigr)$$

Four-step Adams-Bashforth (AB4):

$$y_{n+1} = y_n + \frac{h}{24}\bigl(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}\bigr)$$

Here $f_k$ denotes $f(x_k, y_k)$.
Adams-Moulton Methods (Implicit)

Two-step Adams-Moulton (AM2):

$$y_{n+1} = y_n + \frac{h}{12}\bigl(5 f_{n+1} + 8 f_n - f_{n-1}\bigr)$$

This is implicit because $f_{n+1} = f(x_{n+1}, y_{n+1})$ contains the unknown $y_{n+1}$.
Predictor-Corrector Methods

In practice, predictor-corrector combinations are often used:

Predict: use Adams-Bashforth to obtain a preliminary value $\tilde y_{n+1}$
Evaluate: compute $\tilde f_{n+1} = f(x_{n+1}, \tilde y_{n+1})$
Correct: substitute $\tilde f_{n+1}$ into the Adams-Moulton formula to obtain $y_{n+1}$
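The predict-evaluate-correct cycle can be sketched as follows, pairing AB2 with AM2 in PECE mode; the Heun starting step and the test problem $y' = y$ are illustrative choices of mine:

```python
import numpy as np

def ab2_am2_pc(f, x0, y0, x_end, h):
    """AB2 predictor + AM2 corrector in PECE mode.
    The second starting value is generated with one Heun (RK2) step."""
    y1 = y0 + h / 2 * (f(x0, y0) + f(x0 + h, y0 + h * f(x0, y0)))
    xs, ys = [x0, x0 + h], [y0, y1]
    fs = [f(x0, y0), f(x0 + h, y1)]                 # cached f-history
    while xs[-1] < x_end - 1e-12:
        x_n, y_n = xs[-1], ys[-1]
        y_pred = y_n + h / 2 * (3 * fs[-1] - fs[-2])                  # Predict (AB2)
        f_pred = f(x_n + h, y_pred)                                   # Evaluate
        y_corr = y_n + h / 12 * (5 * f_pred + 8 * fs[-1] - fs[-2])    # Correct (AM2)
        xs.append(x_n + h)
        ys.append(y_corr)
        fs.append(f(x_n + h, y_corr))                                 # Evaluate again
    return np.array(xs), np.array(ys)

xs, ys = ab2_am2_pc(lambda x, y: y, 0.0, 1.0, 1.0, 0.05)
print(ys[-1], np.e)
```

Note that each step costs only two new evaluations of $f$ because the history values are cached, which is the efficiency argument for multistep methods.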
Application Example: The SIR Epidemic Model

```python
def sir_model(t, y, beta, gamma, N):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return [dSdt, dIdt, dRdt]

# Parameter settings (using early COVID-19 data as reference)
N = 1e6              # Total population
I0 = 1               # Initial infected
R0_ratio = 2.5       # Basic reproduction number
gamma = 1 / 14       # Recovery rate (average 14 days to recover)
beta = R0_ratio * gamma  # Transmission rate
```
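A usage sketch: handing this model to SciPy's general-purpose solver. The model function is repeated so the snippet runs standalone; the 365-day window and the tolerances are illustrative choices of mine:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_model(t, y, beta, gamma, N):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return [dSdt, dIdt, dRdt]

N, I0 = 1e6, 1.0
gamma = 1 / 14            # recovery rate (average 14 days)
beta = 2.5 * gamma        # transmission rate from R0 = 2.5
sol = solve_ivp(sir_model, (0, 365), [N - I0, I0, 0.0],
                args=(beta, gamma, N), rtol=1e-6, atol=1e-8)
print("peak infections :", sol.y[1].max())
print("S + I + R stays at N:", sol.y[:, -1].sum())
```

A useful sanity check on any such simulation: $S + I + R$ should remain (numerically close to) the constant $N$, since the model conserves total population.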
Exercise 1: Use Euler's method to solve $y' = -y$, $y(0) = 1$ on $[0, 1]$. Take $h = 0.1$ and compare with the exact solution $y = e^{-x}$.
Exercise 2: Implement Heun's method to solve $y' = x + y$, $y(0) = 1$. Verify that it is second-order.
Exercise 3: Prove that RK4 is exact for $y' = ax + b$ ($a$, $b$ constants).
Exercise 4: Consider the stiff equation $y' = -100y$, $y(0) = 1$. (a) Using forward Euler, find the maximum step size that guarantees stability. (b) Using backward Euler, verify stability for any step size.
Exercise 5: Implement the Adams-Bashforth two-step
method and compare computational efficiency with RK4.
Intermediate Problems
Exercise 6: Implement an adaptive step size RK4 method to keep the local error within a specified tolerance. Test its performance on a problem whose solution changes rapidly in part of the interval.
Exercise 7: Use the Verlet method to simulate a
double pendulum. Observe chaotic behavior.
Exercise 8: Consider the Lotka-Volterra predator-prey model:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$

(a) Use RK4 to simulate the population dynamics. (b) Plot the phase portrait and observe the periodic orbits.
Exercise 9: Use numerical methods to verify: for a
harmonic oscillator, symplectic integrators (like Verlet) have better
energy conservation than RK4 in long-time simulation.
Exercise 10: Implement the Dormand-Prince method
(RK5(4)) and compare with SciPy's solve_ivp.
Advanced Problems
Exercise 11: Study the performance of numerical methods on stiff problems. Consider:

$$y' = -1000\,(y - \cos x) - \sin x, \qquad y(0) = 1$$

The exact solution is $y = \cos x$. (a) With explicit methods, explore the stability restrictions. (b) Implement the implicit trapezoidal method and observe its unconditional stability.
Exercise 12: Consider the two-body problem (Kepler problem):

$$\ddot{\mathbf{r}} = -\frac{\mathbf{r}}{|\mathbf{r}|^3}$$

Use different methods (RK4, symplectic methods) for long-time orbit simulation and compare: (a) preservation of the orbital shape, (b) energy conservation, (c) angular momentum conservation.
Exercise 13: Implement an exponential integrator for solving $y' = Ay + g(y)$, where $A$ is the stiff linear part.
Exercise 14: Study error
accumulation. For chaotic systems (like the Lorenz system),
errors grow exponentially. Design an experiment to quantify this growth
rate.
Exercise 15: Implement a simple automatic
stiffness detection algorithm: judge whether a problem is stiff
based on the eigenvalues of the Jacobian and automatically select an
appropriate solver.
Summary
In this chapter, we learned a complete system of numerical methods
for ordinary differential equations:
For high-precision requirements: use a high-order embedded RK pair.
Practical advice:
First try scipy.integrate.solve_ivp
If too slow or unstable, consider switching methods
Always check convergence: see if results change when step size is
reduced
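That last piece of advice can be illustrated directly: solve the same problem with loose and tight tolerances and compare the answers (the test equation $y' = -2ty$, with exact solution $e^{-t^2}$, is an illustrative choice of mine):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return [-2.0 * t * y[0]]          # y' = -2ty, exact solution y = exp(-t^2)

loose = solve_ivp(f, (0, 2), [1.0], rtol=1e-3)
tight = solve_ivp(f, (0, 2), [1.0], rtol=1e-9, atol=1e-12)
exact = np.exp(-4.0)
print("error, loose tolerance:", abs(loose.y[0][-1] - exact))
print("error, tight tolerance:", abs(tight.y[0][-1] - exact))
```

If tightening the tolerance (or halving a fixed step size) barely changes the answer, the computation has converged; if the answer moves noticeably, the looser result was not trustworthy.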
In the next chapter, we will learn about boundary value problems — how to solve them when conditions are given not only at the initial point but also at the endpoint.
Post title: Ordinary Differential Equations (11): Numerical Methods
Post author: Chen Kai
Create time: 2019-05-29 15:30:00
Post link: https://www.chenk.top/ode-chapter-11-numerical-methods/
Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless stated otherwise.