When you drive a car, you constantly adjust the steering wheel to
keep the vehicle centered in the lane; an air conditioner automatically
regulates cooling power based on temperature feedback; a rocket
precisely controls thrust to maintain orbit. These seemingly different
systems share a common mathematical foundation — control theory. In this
chapter, we explore how differential equations describe and design
control systems, from classical PID controllers to modern state-space
methods, seeing how mathematics helps us tame complex dynamical
systems.
Basic Concepts of Control Theory
Open-Loop vs Closed-Loop Control
Open-loop control: The control signal does not depend on the system output.
- Example: A microwave heats for a set time
- Advantage: Simple
- Disadvantage: Cannot handle disturbances
Closed-loop control (feedback control): The control signal adjusts based on the system output.
- Example: An air conditioner adjusts based on room temperature
- Advantage: Can resist disturbances and automatically correct errors
- Disadvantage: May become unstable
Key signals in the feedback loop:
- $r(t)$: Reference input (setpoint)
- $y(t)$: System output
- $e(t) = r(t) - y(t)$: Error
- $u(t)$: Control input
- Plant: The controlled object
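The disturbance-rejection difference shows up in even a tiny simulation. The sketch below (not from the chapter's code; the plant, gain, and disturbance values are illustrative) drives a first-order plant $\dot{y} = -y + u$ toward a setpoint with a constant unmeasured disturbance added at the input:

```python
import numpy as np

# Illustrative setup: plant y' = -y + u + d, setpoint r, disturbance d.
dt, T = 0.01, 10.0
d = 0.5     # unmeasured constant disturbance
r = 1.0     # setpoint
Kp = 10.0   # proportional feedback gain

def simulate(closed_loop):
    y = 0.0
    for _ in range(int(T / dt)):
        u = Kp * (r - y) if closed_loop else r  # open loop: u is just r
        y += dt * (-y + u + d)                  # Euler step of y' = -y + u + d
    return y

y_open = simulate(False)    # settles near r + d = 1.5, far from the setpoint
y_closed = simulate(True)   # feedback shrinks the offset as Kp grows
print(f"open-loop: {y_open:.3f}, closed-loop: {y_closed:.3f}")
```

The open-loop controller has no way to notice the disturbance, while proportional feedback pulls the output back toward the setpoint (a small residual error remains, which motivates the integral term introduced later).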
Differential Equation Description
Most physical systems can be described by ordinary differential equations. For example, a mass-spring-damper system:

$m\ddot{x}(t) + c\dot{x}(t) + kx(t) = F(t)$

where $F(t)$ is the applied force (control input) and $x(t)$ is the displacement (output).
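As a quick numerical check, this sketch (with illustrative parameter values m = 1, c = 0.5, k = 2) integrates the mass-spring-damper equation with scipy's odeint under a unit step force; the displacement should settle at the static deflection F/k:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameters for m x'' + c x' + k x = F(t)
m, c, k = 1.0, 0.5, 2.0

def msd(state, t):
    x, v = state
    F = 1.0                           # step input F(t) = 1
    return [v, (F - c * v - k * x) / m]

t = np.linspace(0, 20, 2000)
sol = odeint(msd, [0.0, 0.0], t)      # start at rest
# The displacement settles at the static deflection F/k = 0.5.
print(f"final displacement ~ {sol[-1, 0]:.3f}")
```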
Transfer Functions and the Laplace Transform
Laplace Transform Review
The Laplace transform of a function $f(t)$ is

$F(s) = \int_0^\infty f(t)\, e^{-st}\, dt$

Key properties:
- $\mathcal{L}\{f'(t)\} = sF(s) - f(0)$
- $\mathcal{L}\{f''(t)\} = s^2 F(s) - s f(0) - f'(0)$

This transforms differential equations into algebraic equations!
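The derivative property can be verified symbolically with SymPy (the sample function $f(t) = e^{-2t}$ is chosen purely for illustration):

```python
import sympy as sp

# Check L{f'(t)} = s F(s) - f(0) for the sample f(t) = e^{-2t}.
t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t)

F = sp.laplace_transform(f, t, s, noconds=True)            # F(s) = 1/(s + 2)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# The property holds: L{f'} equals s*F(s) - f(0)
assert sp.simplify(dF - (s * F - f.subs(t, 0))) == 0
print(F)
```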
Transfer Functions
For linear time-invariant systems, the transfer function is defined as the ratio of the output and input transforms (with zero initial conditions):

$G(s) = \dfrac{Y(s)}{U(s)}$

For the mass-spring-damper system above, $G(s) = \dfrac{1}{ms^2 + cs + k}$.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import lti, step

# Note: the original preamble (imports, figure setup, and parameter values)
# was lost in extraction; representative values are filled in here.
K = 1.0
omega_n = 1.0
t = np.linspace(0, 15, 500)
t_ode = np.linspace(0, 15, 500)

fig, axes = plt.subplots(1, 2, figsize=(14, 5))

# Second-order step responses for different damping ratios
ax1 = axes[0]
zeta_values = [0.2, 0.5, 0.7, 1.0]
colors = plt.cm.viridis(np.linspace(0, 0.8, len(zeta_values)))
for zeta, color in zip(zeta_values, colors):
    num = [K * omega_n**2]
    den = [1, 2*zeta*omega_n, omega_n**2]
    system = lti(num, den)
    t_out, y = step(system, T=t)
    ax1.plot(t_out, y, color=color, linewidth=2, label=f'ζ = {zeta}')
ax1.set_xlabel('Time', fontsize=12)
ax1.set_ylabel('Output y(t)', fontsize=12)
ax1.set_title('Effect of Damping Ratio', fontsize=14, fontweight='bold')
ax1.legend(fontsize=11)
ax1.grid(True, alpha=0.3)

# Comparison of different time constants
ax2 = axes[1]
tau_values = [0.5, 1.0, 2.0, 4.0]
colors = plt.cm.viridis(np.linspace(0, 0.8, len(tau_values)))
for tau_val, color in zip(tau_values, colors):
    y = K * (1 - np.exp(-t_ode/tau_val))
    ax2.plot(t_ode, y, color=color, linewidth=2, label=f'τ = {tau_val}')
ax2.axhline(y=K, color='k', linestyle='--', alpha=0.5)
ax2.set_xlabel('Time', fontsize=12)
ax2.set_ylabel('Output y(t)', fontsize=12)
ax2.set_title('Effect of Time Constant', fontsize=14, fontweight='bold')
ax2.legend(fontsize=11)
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.show()
PID Control
The PID controller is the most widely used controller in industry:

$u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \dfrac{de(t)}{dt}$

It consists of three components:
- P (Proportional): Proportional to the error; provides fast response but leaves a steady-state error
- I (Integral): Eliminates the steady-state error, but may cause oscillation
- D (Derivative): Predicts the error trend and improves stability, but is sensitive to noise
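To make the three terms concrete, here is a bare-bones PID loop (no anti-windup or derivative filtering) driving the mass-spring-damper plant toward a setpoint; the gains below are illustrative, not tuned to any specification:

```python
import numpy as np

# Illustrative gains and plant parameters (m x'' = u - c x' - k x)
Kp, Ki, Kd = 8.0, 4.0, 2.0
m, c, k = 1.0, 0.5, 2.0
dt, setpoint = 0.001, 1.0

x, v = 0.0, 0.0
integral = 0.0
prev_error = setpoint - x
for _ in range(int(20.0 / dt)):
    error = setpoint - x
    integral += error * dt                    # I term accumulates the error
    derivative = (error - prev_error) / dt    # D term tracks the error trend
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    # Euler step of the plant dynamics
    a = (u - c * v - k * x) / m
    x += v * dt
    v += a * dt

# With only P control the plant would settle below the setpoint;
# the integral term drives the steady-state error to zero.
print(f"position after 20 s: {x:.3f}")
```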
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def ziegler_nichols_tuning():
    """Ziegler-Nichols tuning demonstration"""
    # Plant
    def plant_dynamics(state, t, u, m=1.0, c=0.2, k=1.0):
        x, v = state
        return [v, (u - c*v - k*x)/m]

    # Find critical gain
    def find_critical_gain():
        Kp_test = np.linspace(0.1, 10, 100)
        oscillation_detected = []
        for Kp in Kp_test:
            dt = 0.01
            t = np.arange(0, 50, dt)
            y = [0.1, 0]  # Initial disturbance
            y_history = []
            for _ in range(len(t)):
                error = 0 - y[0]  # Target is 0
                u = Kp * error
                y_new = odeint(plant_dynamics, y, [0, dt], args=(u,))[-1]
                y = list(y_new)
                y_history.append(y[0])
            y_history = np.array(y_history)
            # Check for oscillation
            if len(y_history) > 100:
                late_signal = y_history[-500:]
                amplitude_variation = np.max(late_signal) - np.min(late_signal)
                if amplitude_variation > 0.01:  # Still has amplitude
                    oscillation_detected.append((Kp, amplitude_variation))
        return oscillation_detected

    # Demonstrate response for different Kp
    fig, axes = plt.subplots(2, 2, figsize=(14, 10))
    Kp_values = [1.0, 3.0, 5.0, 8.0]
    dt = 0.01
    t = np.arange(0, 30, dt)
    for ax, Kp in zip(axes.flat, Kp_values):
        y = [0.5, 0]
        y_history = []
        for _ in range(len(t)):
            error = 0 - y[0]
            u = Kp * error
            y_new = odeint(plant_dynamics, y, [0, dt], args=(u,))[-1]
            y = list(y_new)
            y_history.append(y[0])
        ax.plot(t, y_history, 'b-', linewidth=2)
        ax.axhline(y=0, color='r', linestyle='--', alpha=0.5)
        ax.set_xlabel('Time', fontsize=12)
        ax.set_ylabel('Position', fontsize=12)
        ax.set_title(f'Kp = {Kp}', fontsize=12, fontweight='bold')
        ax.grid(True, alpha=0.3)
    plt.suptitle('Finding Critical Gain (Ziegler-Nichols)', fontsize=14, fontweight='bold')
    plt.tight_layout()
    plt.savefig('ziegler_nichols.png', dpi=150, bbox_inches='tight')
    plt.show()

ziegler_nichols_tuning()
State-Space Methods
State-Space Representation
Any linear time-invariant system can be written as:

$\dot{x} = Ax + Bu$
$y = Cx + Du$

where:
- $x$: State vector
- $u$: Input vector
- $y$: Output vector
- $A$: System matrix
- $B$: Input matrix
- $C$: Output matrix
- $D$: Feedforward matrix
From Differential Equations to State Space
Example: for the mass-spring-damper equation $m\ddot{x} + c\dot{x} + kx = F$, define the state $x_1 = x$, $x_2 = \dot{x}$. Then

$\dot{x}_1 = x_2$
$\dot{x}_2 = \dfrac{1}{m}(F - c x_2 - k x_1)$
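As a consistency check (using illustrative values m = 1, c = 0.5, k = 2), the sketch below builds the state-space matrices for the mass-spring-damper system and lets scipy.signal.ss2tf recover the transfer function, which should match $1/(ms^2 + cs + k)$:

```python
import numpy as np
from scipy.signal import ss2tf

# Mass-spring-damper m x'' + c x' + k x = F with state x1 = x, x2 = x'
m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])   # we observe the displacement x1
D = np.array([[0.0]])

# ss2tf recovers the transfer function coefficients
num, den = ss2tf(A, B, C, D)
print(num, den)   # denominator coefficients are [1, c/m, k/m] = [1, 0.5, 2]
```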
Stability Analysis
The system is stable if and only if all eigenvalues of $A$ have negative real parts.
Lyapunov stability: If there exists a positive definite matrix $P$ such that $A^T P + P A = -Q$ with $Q$ positive definite, then the system is stable.
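Both criteria can be checked numerically. The sketch below uses a damped oscillator as an example and scipy's solve_continuous_lyapunov to obtain $P$ for $Q = I$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Damped oscillator x'' + 0.5 x' + 2 x = 0 in state-space form
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])

# Criterion 1: all eigenvalues of A have negative real parts
eigs = np.linalg.eigvals(A)
stable_by_eigs = all(e.real < 0 for e in eigs)

# Criterion 2: solve A^T P + P A = -Q for Q = I and check P > 0.
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so pass A.T and -Q.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
stable_by_lyapunov = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

print(stable_by_eigs, stable_by_lyapunov)  # True True
```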
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.signal import place_poles

def pole_placement_demo():
    """Pole placement demonstration"""
    # Unstable system
    A = np.array([[0, 1], [2, 0]])  # Has a positive eigenvalue
    B = np.array([[0], [1]])
    C = np.array([[1, 0]])

    # Original eigenvalues
    orig_poles = np.linalg.eigvals(A)
    print(f"Original poles: {orig_poles}")
    print("System unstable (has a pole with positive real part)")

    # Desired poles
    desired_poles = np.array([-2, -3])

    # Compute feedback gain
    result = place_poles(A, B, desired_poles)
    K = result.gain_matrix
    print(f"\nDesired poles: {desired_poles}")
    print(f"Feedback gain K = {K}")

    # Verify
    A_cl = A - B @ K
    achieved_poles = np.linalg.eigvals(A_cl)
    print(f"Achieved poles: {achieved_poles}")

    # Simulation comparison
    fig, axes = plt.subplots(1, 2, figsize=(14, 5))
    t = np.linspace(0, 5, 500)
    x0 = [1, 0]

    # Open-loop response
    def open_loop(x, t):
        return A @ x
    sol_open = odeint(open_loop, x0, t)

    # Closed-loop response
    def closed_loop(x, t):
        return (A - B @ K) @ x
    sol_closed = odeint(closed_loop, x0, t)

    ax1 = axes[0]
    ax1.plot(t, sol_open[:, 0], 'r-', linewidth=2, label='Open-loop (unstable)')
    ax1.plot(t, sol_closed[:, 0], 'b-', linewidth=2, label='Closed-loop (stabilized)')
    ax1.axhline(y=0, color='k', linestyle='--', alpha=0.3)
    ax1.set_xlabel('Time', fontsize=12)
    ax1.set_ylabel('Position x', fontsize=12)
    ax1.set_title('Stabilization via State Feedback', fontsize=14, fontweight='bold')
    ax1.legend(fontsize=11)
    ax1.grid(True, alpha=0.3)
    ax1.set_ylim(-5, 15)

    # Pole locations
    ax2 = axes[1]
    ax2.plot(orig_poles.real, orig_poles.imag, 'rx', markersize=15, mew=3, label='Open-loop poles')
    ax2.plot(achieved_poles.real, achieved_poles.imag, 'bo', markersize=12, label='Closed-loop poles')
    ax2.axhline(y=0, color='k', linewidth=0.5)
    ax2.axvline(x=0, color='k', linewidth=0.5)
    ax2.fill_between([-4, 0], [-3, -3], [3, 3], alpha=0.1, color='green', label='Stable region')
    ax2.set_xlabel('Real Part', fontsize=12)
    ax2.set_ylabel('Imaginary Part', fontsize=12)
    ax2.set_title('Pole Locations', fontsize=14, fontweight='bold')
    ax2.legend(fontsize=10)
    ax2.grid(True, alpha=0.3)
    ax2.set_xlim(-4, 3)
    ax2.set_ylim(-3, 3)

    plt.tight_layout()
    plt.savefig('pole_placement.png', dpi=150, bbox_inches='tight')
    plt.show()

pole_placement_demo()
LQR Optimal Control
Problem Formulation
The Linear Quadratic Regulator (LQR) finds the control law that minimizes the cost function

$J = \int_0^\infty \left( x^T Q x + u^T R u \right) dt$

where:
- $Q$: State weight matrix (positive semi-definite)
- $R$: Control weight matrix (positive definite)
Optimal Solution
The optimal control law is state feedback $u = -Kx$, where $K = R^{-1} B^T P$ and $P$ is the solution to the algebraic Riccati equation:

$A^T P + P A - P B R^{-1} B^T P + Q = 0$
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.linalg import solve_continuous_are

def lqr_demo():
    """LQR controller demonstration"""
    # Linearized inverted pendulum model
    M = 1.0   # Cart mass
    m = 0.1   # Pendulum mass
    l = 0.5   # Pendulum length
    g = 9.81  # Gravitational acceleration

    # State: [x, x_dot, theta, theta_dot]
    A = np.array([
        [0, 1, 0, 0],
        [0, 0, -m*g/M, 0],
        [0, 0, 0, 1],
        [0, 0, (M+m)*g/(M*l), 0]
    ])
    B = np.array([[0], [1/M], [0], [-1/(M*l)]])

    # Open-loop eigenvalues
    open_poles = np.linalg.eigvals(A)
    print(f"Open-loop poles: {open_poles}")
    print(f"System unstable: {any(p.real > 0 for p in open_poles)}")

    # LQR design
    Q = np.diag([10, 1, 100, 10])  # State weights
    R = np.array([[1]])            # Control weight

    # Solve Riccati equation
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.inv(R) @ B.T @ P
    print(f"\nLQR gain K = {K.flatten()}")

    # Closed-loop poles
    A_cl = A - B @ K
    closed_poles = np.linalg.eigvals(A_cl)
    print(f"Closed-loop poles: {closed_poles}")

    # Simulation
    def inverted_pendulum(x, t, K, A, B):
        u = -K @ x
        return (A @ x + B @ u).flatten()

    t = np.linspace(0, 10, 1000)
    x0 = [0, 0, 0.2, 0]  # Initial angle deviation
    sol = odeint(inverted_pendulum, x0, t, args=(K, A, B))
    u_history = [(-K @ sol[i])[0] for i in range(len(t))]

    fig, axes = plt.subplots(2, 2, figsize=(14, 10))

    # Position
    ax1 = axes[0, 0]
    ax1.plot(t, sol[:, 0], 'b-', linewidth=2)
    ax1.set_xlabel('Time (s)', fontsize=12)
    ax1.set_ylabel('Cart Position (m)', fontsize=12)
    ax1.set_title('Cart Position', fontsize=12, fontweight='bold')
    ax1.grid(True, alpha=0.3)

    # Angle
    ax2 = axes[0, 1]
    ax2.plot(t, sol[:, 2]*180/np.pi, 'r-', linewidth=2)
    ax2.set_xlabel('Time (s)', fontsize=12)
    ax2.set_ylabel('Pendulum Angle (deg)', fontsize=12)
    ax2.set_title('Pendulum Angle', fontsize=12, fontweight='bold')
    ax2.grid(True, alpha=0.3)

    # Control force
    ax3 = axes[1, 0]
    ax3.plot(t, u_history, 'g-', linewidth=2)
    ax3.set_xlabel('Time (s)', fontsize=12)
    ax3.set_ylabel('Control Force (N)', fontsize=12)
    ax3.set_title('Control Input', fontsize=12, fontweight='bold')
    ax3.grid(True, alpha=0.3)

    # Pole comparison
    ax4 = axes[1, 1]
    ax4.plot(open_poles.real, open_poles.imag, 'rx', markersize=15, mew=3, label='Open-loop')
    ax4.plot(closed_poles.real, closed_poles.imag, 'bo', markersize=12, label='Closed-loop (LQR)')
    ax4.axhline(y=0, color='k', linewidth=0.5)
    ax4.axvline(x=0, color='k', linewidth=0.5)
    ax4.fill_between([-10, 0], [-10, -10], [10, 10], alpha=0.1, color='green')
    ax4.set_xlabel('Real Part', fontsize=12)
    ax4.set_ylabel('Imaginary Part', fontsize=12)
    ax4.set_title('Pole Locations', fontsize=12, fontweight='bold')
    ax4.legend(fontsize=10)
    ax4.grid(True, alpha=0.3)
    ax4.set_xlim(-10, 5)
    ax4.set_ylim(-10, 10)

    plt.suptitle('LQR Control of Inverted Pendulum', fontsize=14, fontweight='bold')
    plt.tight_layout()
    plt.savefig('lqr_control.png', dpi=150, bbox_inches='tight')
    plt.show()

lqr_demo()
Observer Design
Why Do We Need Observers?
In practical systems, not all states can be directly measured. An
observer (or state estimator) reconstructs states from
measurable inputs and outputs.
Luenberger Observer

$\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$

where $\hat{x}$ is the estimated state and $L$ is the observer gain.
Estimation error dynamics: with $e = x - \hat{x}$,

$\dot{e} = (A - LC)e$

By choosing $L$ (placing the eigenvalues of $A - LC$), the error convergence rate can be arbitrarily assigned, provided the system is observable.
Separation Principle
Theorem: For linear systems, the state feedback
controller and observer can be designed independently. The closed-loop
poles are the union of controller poles and observer poles.
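Because the eigenvalues of $A - LC$ equal those of $A^T - C^T L^T$, the observer gain can be computed with the same pole-placement routine applied to the dual pair $(A^T, C^T)$. A minimal sketch (the plant and pole locations below are illustrative; the observer poles are chosen 3x faster than the controller poles, as is common practice):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative plant: damped oscillator, measuring only position
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])

controller_poles = np.array([-1.0, -2.0])
observer_poles = 3.0 * controller_poles   # 3x faster than the controller

# Duality: place the poles of A^T - C^T L^T, then transpose the gain
result = place_poles(A.T, C.T, observer_poles)
L = result.gain_matrix.T                  # observer gain, shape (2, 1)

obs_eigs = np.linalg.eigvals(A - L @ C)
print("observer eigenvalues:", np.sort(obs_eigs.real))  # ~ [-6, -3]
```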
In this chapter, we covered the core content of control theory:
- Transfer functions: Transform differential equations into algebraic problems
- PID control: The most commonly used controller in industry
- State-space methods: The foundation of modern control theory
- Controllability and observability: Whether the system can be controlled and observed
- Pole placement: Arbitrarily assign closed-loop poles
- LQR optimal control: Optimal control minimizing a quadratic cost function
- Observer design: Reconstruct states from outputs
- Frequency-domain analysis: Understand systems from the frequency-response perspective
Control theory elevates differential equations from a descriptive
tool to a design tool — we can not only analyze systems but also
actively design system behavior. From aircraft autopilots to smartphone
image stabilization, control theory applications are ubiquitous.
Exercises
Basic Problems
1. Find the step response of the first-order system and plot the curve.
2. For the second-order system:
   - Calculate the natural frequency $\omega_n$ and damping ratio $\zeta$
   - Determine whether the system is underdamped, critically damped, or overdamped
   - Calculate the step response overshoot
3. Design a PID controller for the following system to achieve overshoot < 10% and settling time < 2 seconds.
4. Convert the following differential equation to state-space form.
5. Determine the controllability and observability of the following system.
Advanced Problems
1. Cascade control: Design a cascade PID controller for a temperature system:
   - Inner loop: Heater power control
   - Outer loop: Temperature control
   Compare the performance of single-loop and cascade control.
2. System identification: Given step response data, estimate the parameters $K$ and $\tau$ of a first-order system.
3. Prove that for the LQR problem, if $(A, B)$ is controllable and $(A, Q^{1/2})$ is observable, then the closed-loop system is asymptotically stable.
4. Discretization: Discretize the continuous-time system $\dot{x} = Ax + Bu$ to a discrete-time system $x[k+1] = A_d x[k] + B_d u[k]$. Discuss the choice of sampling period.
5. Design a Luenberger observer with error convergence rate 3 times faster than the controller poles.
Programming Problems
1. Implement a general PID controller class including:
   - Anti-windup
   - Derivative filtering
   - Output limiting
2. Write a program for automatic Ziegler-Nichols parameter tuning.
3. Implement a complete inverted pendulum simulator including:
   - Nonlinear dynamics
   - LQR controller
   - State observer
   - Visualization animation
4. Train a neural network controller using reinforcement learning (e.g., PPO) and compare performance with LQR.
5. Implement a digital filter design tool supporting Butterworth, Chebyshev, and other types.
Discussion Questions
1. PID controllers are so simple — why are they still the most commonly used controllers in industry?
2. Discuss the effect of model uncertainty on controller performance. How can robust controllers be designed?
3. Compare the advantages and disadvantages of pole placement and LQR. When should each method be used?
4. Why are observer poles typically designed to be faster than controller poles?
5. Discuss the relationship between machine learning methods (such as deep reinforcement learning) and traditional control theory, and their respective application scenarios.
References
Ogata, K. (2010). Modern Control Engineering. Prentice
Hall.
Franklin, G. F., Powell, J. D., & Emami-Naeini, A. (2015).
Feedback Control of Dynamic Systems. Pearson.
Åström, K. J., & Murray, R. M. (2008). Feedback
Systems: An Introduction for Scientists and Engineers. Princeton
University Press.
Dorf, R. C., & Bishop, R. H. (2017). Modern Control
Systems. Pearson.
Skogestad, S., & Postlethwaite, I. (2005). Multivariable
Feedback Control: Analysis and Design. Wiley.
Lewis, F. L., Vrabie, D., & Syrmos, V. L. (2012). Optimal
Control. Wiley.
Khalil, H. K. (2002). Nonlinear Systems. Prentice
Hall.