Reinforcement Learning (3): Policy Gradient and Actor-Critic Methods
Chen Kai

If value function methods learn policies indirectly by "evaluating action quality," then policy gradient methods directly optimize the policy itself. DQN's success proved deep learning's tremendous potential in reinforcement learning, but its limitations are also obvious: it can only handle discrete action spaces and struggles with continuous control tasks like robot control and autonomous driving. Policy Gradient methods parameterize policies as neural networks and use gradient ascent to directly maximize expected returns, naturally supporting continuous actions. From the earliest REINFORCE algorithm to Actor-Critic architectures combining value functions, from asynchronous parallel A3C to the breakthrough DDPG, from sample-efficient TD3 to the industrially widespread PPO, to SAC under the maximum entropy framework — policy gradient methods have become the mainstream technical approach in deep reinforcement learning. This chapter systematically traces this evolution path, deeply analyzing each algorithm's design motivations, mathematical principles, and implementation details.

Policy Gradient Fundamentals: Direct Policy Optimization

Why Policy Gradient is Needed

In Chapter 2, we saw that DQN learns a Q-function $Q_\theta(s, a)$ and greedily selects actions to obtain a policy. This indirect approach has several problems:

Problem 1: Only Handles Discrete Actions

DQN needs to compute $\arg\max_a Q(s, a)$, which is simple in discrete spaces (like Atari's 18 actions) by enumeration, but in continuous spaces (like robot joint angles) requires solving an optimization problem for every action selection, which is computationally expensive and imprecise.

Problem 2: Exploration Dilemma of Deterministic Policies

The greedy policy $\pi(s) = \arg\max_a Q(s, a)$ is completely deterministic; exploration can only rely on heuristics like $\epsilon$-greedy, lacking principled guidance.

Problem 3: Accumulation of Value Function Approximation Errors

In high-dimensional state spaces, Q-function approximation errors accumulate and amplify through the $\max$ operation (like the overestimation problem in Chapter 2), affecting final policy quality.

Problem 4: Cannot Represent Stochastic Policies

The optimal policy for some problems is inherently stochastic. A classic example is rock-paper-scissors: any deterministic policy will inevitably be exploited by opponents; the optimal policy is uniformly random.

Policy Gradient methods circumvent these problems by directly parameterizing the policy $\pi_\theta(a|s)$:

- Policies can output action probability distributions (discrete) or distribution parameters (continuous), naturally supporting stochastic policies
- For continuous actions, a Gaussian distribution is typically used, with the network directly outputting its mean and variance
- The optimization objective is the expected return, optimized directly through gradient ascent

Policy Gradient Theorem

Let the policy $\pi_\theta(a|s)$ be parameterized by $\theta$; the goal is to maximize the expected return: $J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_t \gamma^t r_t\right]$, where $\rho^{\pi_\theta}$ is the state distribution induced by the policy and $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is a trajectory.

Intuitively, we want to take the gradient with respect to $\theta$: $\nabla_\theta J(\theta)$. But the problem is that the distribution inside the expectation depends on $\theta$ (when the policy changes, the trajectory distribution changes), so we can't simply interchange gradient and expectation.

The Policy Gradient Theorem (Sutton et al., 2000) gives the exact gradient expression:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\nabla_\theta \log \pi_\theta(a|s) \, Q^{\pi_\theta}(s, a)\right]$$
This formula is very elegant:

- $\nabla_\theta \log \pi_\theta(a|s)$ is the score function, measuring how parameter changes affect the probability of selecting $a$
- $Q^{\pi_\theta}(s, a)$ is the long-term value of that action
- Their product: actions with high value have their probability increased; actions with low value have it decreased

More elegantly, the gradient doesn't depend on the state transition probability $P(s'|s, a)$: even if the environment model is unknown, as long as we can sample trajectories, we can estimate the gradient.

Derivation: From Trajectory Distribution to Policy Gradient

The complete derivation requires a few techniques. First, write the objective in terms of the trajectory distribution: $J(\theta) = \int P(\tau; \theta) R(\tau) \, d\tau$, where $P(\tau; \theta) = \rho_0(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t|s_t) P(s_{t+1}|s_t, a_t)$ is the trajectory probability and $R(\tau) = \sum_t \gamma^t r_t$.

Take the derivative with respect to $\theta$:
$$\nabla_\theta J(\theta) = \int \nabla_\theta P(\tau; \theta) R(\tau) \, d\tau$$
Using the log-derivative trick $\nabla_\theta P = P \, \nabla_\theta \log P$, we get:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim P(\tau; \theta)}\left[\nabla_\theta \log P(\tau; \theta) \, R(\tau)\right]$$
Expand:
$$\log P(\tau; \theta) = \log \rho_0(s_0) + \sum_{t=0}^{T-1} \log \pi_\theta(a_t|s_t) + \sum_{t=0}^{T-1} \log P(s_{t+1}|s_t, a_t)$$
When taking the derivative with respect to $\theta$, the terms $\log \rho_0(s_0)$ and $\log P(s_{t+1}|s_t, a_t)$ don't depend on $\theta$, so they disappear:
$$\nabla_\theta \log P(\tau; \theta) = \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t)$$
Substituting:
$$\nabla_\theta J(\theta) = \mathbb{E}_\tau\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \, R(\tau)\right]$$
This is the REINFORCE algorithm form. Further, noting that the action at time $t$ only affects rewards after $t$ (causality), we can replace $R(\tau)$ with $G_t = \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'}$, obtaining:
$$\nabla_\theta J(\theta) = \mathbb{E}_\tau\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t) \, G_t\right]$$
where $G_t$ is precisely an unbiased estimate of $Q^{\pi_\theta}(s_t, a_t)$. This is the policy gradient theorem.
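The log-derivative trick above can be checked numerically. The following sketch uses a hypothetical two-armed bandit (a single state, fixed rewards per action, softmax policy over logits `theta`) and compares the score-function estimate $\mathbb{E}[\nabla_\theta \log \pi_\theta(a) \, r(a)]$ against the analytic gradient of $J(\theta) = \sum_a \pi_\theta(a) r_a$; the setup and variable names are illustrative, not from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit: fixed rewards, softmax policy over logits theta
rewards = np.array([1.0, 3.0])
theta = np.array([0.5, -0.5])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p = softmax(theta)

# Analytic gradient of J(theta) = sum_a pi(a) r_a: dJ/dtheta_k = pi_k (r_k - J)
J = p @ rewards
analytic = p * (rewards - J)

# Score-function (REINFORCE) estimate, using grad_theta_k log pi(a) = 1[a=k] - pi_k
n = 200_000
actions = rng.choice(2, size=n, p=p)
estimate = np.zeros(2)
for k in range(2):
    estimate[k] = np.mean(((actions == k) - p[k]) * rewards[actions])

print(analytic, estimate)  # the two should agree closely
```

With enough samples the Monte Carlo estimate matches the analytic gradient, despite never differentiating through the sampling itself — exactly the property that makes the estimator model-free.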

Baseline: Reducing Variance

One problem with policy gradients is high variance. Consider that $G_t$ might range from -100 to +100, causing very unstable gradient estimates.

A simple but effective trick is subtracting a baseline:
$$\nabla_\theta J(\theta) = \mathbb{E}_\tau\left[\sum_t \nabla_\theta \log \pi_\theta(a_t|s_t) \left(G_t - b(s_t)\right)\right]$$
As long as $b(s_t)$ doesn't depend on $a_t$, the gradient's expectation is unchanged (because $\mathbb{E}_{a \sim \pi_\theta}\left[\nabla_\theta \log \pi_\theta(a|s)\right] = 0$), but the variance can be greatly reduced.

The most commonly used baseline is the state value function $V^\pi(s)$, at which point $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$ is called the advantage function. Intuitively, $V^\pi(s)$ is the "average" value of that state, and $A^\pi(s, a)$ measures how much better action $a$ is than average. Using $A$ instead of $G_t$ only reinforces "better than average" actions, avoiding increasing the probabilities of all actions (even poor ones).
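The variance reduction is easy to see empirically. This sketch reuses a hypothetical two-armed bandit (probabilities and rewards are made up for illustration) and compares the gradient estimate for one logit with and without a baseline equal to the expected reward:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bandit: two actions, fixed rewards, current policy probabilities p
rewards = np.array([1.0, 3.0])
p = np.array([0.7, 0.3])
baseline = p @ rewards          # V(s): expected reward under the current policy

n = 100_000
a = rng.choice(2, size=n, p=p)
score = (a == 0) - p[0]         # d/dtheta_0 log pi(a) for a softmax policy

g_plain = score * rewards[a]                 # no baseline
g_base = score * (rewards[a] - baseline)     # with baseline

print(g_plain.mean(), g_base.mean())  # nearly identical means (same expectation)
print(g_plain.var(), g_base.var())    # variance drops sharply with the baseline
```

Both estimators target the same gradient, but the baselined one concentrates far more tightly around it, which is why virtually every practical policy gradient method learns a value baseline.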

REINFORCE Algorithm: Monte Carlo Policy Gradient

Algorithm Flow

REINFORCE (Williams, 1992) is the simplest policy gradient algorithm:


Algorithm: REINFORCE

  1. Randomly initialize policy parameters $\theta$
  2. for episode $= 1, 2, \ldots$ do
  3.   Generate trajectory $\tau = (s_0, a_0, r_0, \ldots, s_T)$ following $\pi_\theta$
  4.   for $t = 0, 1, \ldots, T-1$ do
  5.     $G_t = \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'}$
  6.     $\nabla J \mathrel{+}= \nabla_\theta \log \pi_\theta(a_t|s_t) \, G_t$
  7.   end for
  8.   $\theta \leftarrow \theta + \alpha \nabla J$
  9. end for


Key points:

- Line 3: Sample a complete trajectory (on-policy, must use the current policy)
- Line 5: Discounted return from $t$ to termination (Monte Carlo estimate)
- Line 6: Policy gradient formula
- Line 8: Gradient ascent update

REINFORCE with Baseline

Adding the state value function $V_w(s)$ as a baseline:


Algorithm: REINFORCE with Baseline

  1. Initialize policy parameters $\theta$ and value function parameters $w$
  2. for episode $= 1, 2, \ldots$ do
  3.   Generate trajectory $\tau$ following $\pi_\theta$
  4.   for $t = 0, 1, \ldots, T-1$ do
  5.     Compute $G_t$ and advantage $A_t = G_t - V_w(s_t)$
  6.   end for
  7.   $\nabla J = \sum_t \nabla_\theta \log \pi_\theta(a_t|s_t) \, A_t$
  8.   $L_V = \sum_t (G_t - V_w(s_t))^2$
  9.   $\theta \leftarrow \theta + \alpha \nabla J$, $w \leftarrow w - \beta \nabla_w L_V$
  10. end for


Line 8 trains the value function with mean squared error to approximate true return.

Code Implementation: CartPole

Implementing REINFORCE with PyTorch to solve CartPole:

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
import gym
import numpy as np

class PolicyNetwork(nn.Module):
    """Policy network: outputs action probability distribution"""
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super(PolicyNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, action_dim)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        action_logits = self.fc3(x)
        return F.softmax(action_logits, dim=-1)

class ValueNetwork(nn.Module):
    """Value network: outputs state value"""
    def __init__(self, state_dim, hidden_dim=128):
        super(ValueNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

def compute_returns(rewards, gamma=0.99):
    """Compute discounted returns"""
    returns = []
    R = 0
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    return returns

def reinforce_with_baseline(env_name='CartPole-v1', episodes=1000, gamma=0.99):
    env = gym.make(env_name)
    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.n

    # Initialize networks
    policy_net = PolicyNetwork(state_dim, action_dim)
    value_net = ValueNetwork(state_dim)

    policy_optimizer = optim.Adam(policy_net.parameters(), lr=3e-4)
    value_optimizer = optim.Adam(value_net.parameters(), lr=1e-3)

    episode_rewards = []

    for episode in range(episodes):
        state = env.reset()
        log_probs = []
        values = []
        rewards = []

        # Sample trajectory
        while True:
            state_tensor = torch.FloatTensor(state).unsqueeze(0)

            # Policy network selects action
            action_probs = policy_net(state_tensor)
            dist = Categorical(action_probs)
            action = dist.sample()

            # Value network estimates state value
            value = value_net(state_tensor)

            # Execute action
            next_state, reward, done, _ = env.step(action.item())

            # Record
            log_probs.append(dist.log_prob(action))
            values.append(value)
            rewards.append(reward)

            state = next_state

            if done:
                break

        # Compute returns
        returns = compute_returns(rewards, gamma)
        returns = torch.FloatTensor(returns)

        # Convert to tensors; squeeze values to shape (T,) to match returns
        log_probs = torch.stack(log_probs)
        values = torch.cat(values).squeeze(-1)

        # Compute advantages
        advantages = returns - values.detach()

        # Policy loss (negative for gradient ascent)
        policy_loss = -(log_probs * advantages).mean()

        # Value loss
        value_loss = F.mse_loss(values, returns)

        # Update
        policy_optimizer.zero_grad()
        policy_loss.backward()
        policy_optimizer.step()

        value_optimizer.zero_grad()
        value_loss.backward()
        value_optimizer.step()

        episode_rewards.append(sum(rewards))

        if (episode + 1) % 100 == 0:
            avg_reward = np.mean(episode_rewards[-100:])
            print(f"Episode {episode+1}, Avg Reward: {avg_reward:.2f}")

    return policy_net, value_net, episode_rewards

# Run training
policy_net, value_net, rewards = reinforce_with_baseline(episodes=1000)

# Visualization
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 5))
plt.plot(rewards, alpha=0.3)
plt.plot(np.convolve(rewards, np.ones(50)/50, mode='valid'), linewidth=2)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
plt.title('REINFORCE with Baseline on CartPole')
plt.grid(True)
plt.show()

This code typically solves CartPole within 100-200 episodes (average reward > 195).

REINFORCE Pros and Cons

Pros:

- Simple and intuitive, easy to implement
- Unbiased gradient estimate (using the true return)
- Naturally supports continuous action spaces (just change the policy network's output distribution)

Cons:

- High variance: $G_t$ has large randomness, even with a baseline
- Low sample efficiency: on-policy, each trajectory used only once
- Training instability: the learning curve oscillates severely

These drawbacks motivated researchers to explore more advanced methods.

Actor-Critic Architecture: Combining Policy and Value

From REINFORCE to Actor-Critic

REINFORCE uses the complete return $G_t$ to estimate the Q-value, which is a Monte Carlo method with high variance. Can we use temporal difference (TD) to reduce variance?

Recall the policy gradient: $\nabla_\theta J = \mathbb{E}\left[\nabla_\theta \log \pi_\theta(a_t|s_t) \, Q^\pi(s_t, a_t)\right]$. In REINFORCE, $Q^\pi$ is estimated with $G_t$. Actor-Critic's idea is: use a neural network $Q_w(s, a)$ or $V_w(s)$ to approximate the value, then train this network with TD methods.

The architecture splits into two parts:

- Actor: the policy network $\pi_\theta(a|s)$, responsible for selecting actions
- Critic: the value network $Q_w(s, a)$ or $V_w(s)$, responsible for evaluating actions

Actor updates based on Critic's feedback, Critic updates based on environment rewards. This "actor-critic" interaction is where the name comes from.

Advantage Actor-Critic (A2C)

Using the state value function $V_w(s)$ as the Critic, the advantage function is estimated as:
$$\hat{A}_t = \delta_t = r_t + \gamma V_w(s_{t+1}) - V_w(s_t)$$
This is the 1-step TD error. Compared to $G_t$, it has lower variance (it depends only on a one-step transition) but introduces bias (because $V_w$ is approximate).
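The TD-error advantage needs only one transition. A tiny sketch with hypothetical critic values (the numbers are made up for illustration):

```python
# One-step TD error as an advantage estimate: delta = r + gamma * V(s') - V(s)
gamma = 0.99
v_s, v_next = 2.0, 3.0   # hypothetical critic values V(s_t), V(s_{t+1})
r = 1.0                  # reward observed on this transition

delta = r + gamma * v_next - v_s
print(delta)  # 1 + 0.99 * 3 - 2 = 1.97
```

A positive $\delta_t$ means the transition went better than the critic expected, so the action's probability is pushed up; a negative one pushes it down.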

Complete algorithm:


Algorithm: Advantage Actor-Critic (A2C)

  1. Initialize Actor parameters $\theta$ and Critic parameters $w$
  2. for step $t = 0, 1, 2, \ldots$ do
  3.   Observe state $s_t$
  4.   Sample action $a_t \sim \pi_\theta(\cdot|s_t)$
  5.   Execute $a_t$, observe $r_t, s_{t+1}$
  6.   $\delta_t = r_t + \gamma V_w(s_{t+1}) - V_w(s_t)$
  7.   $w \leftarrow w - \beta \nabla_w \delta_t^2$
  8.   $\theta \leftarrow \theta + \alpha \nabla_\theta \log \pi_\theta(a_t|s_t) \, \delta_t$
  9. end for


Note:

- Line 6 uses the TD error as the advantage estimate
- Line 7 minimizes the value function's TD error
- Line 8 uses the TD error to guide the policy update

A3C: Asynchronous Advantage Actor-Critic

A3C (Asynchronous Advantage Actor-Critic, Mnih et al., 2016) is the parallel version of A2C and the first on-policy algorithm to compete with DQN on Atari.

Core idea: Run multiple environment instances in parallel, each worker samples independently and computes gradients, asynchronously updating shared global parameters.

Why is parallelization effective?

- Breaks sample correlation: experiences from different workers come from different states, reducing correlation
- Accelerates training: multi-core CPUs can sample simultaneously, with the GPU handling network updates
- Exploration diversity: different workers can use different exploration strategies (like different exploration rates)

Pseudocode (simplified version):


Algorithm: A3C (Single Worker)

  1. Global parameters $\theta, w$ shared across workers
  2. Worker loop:
  3.   Sync local parameters $\theta' \leftarrow \theta$, $w' \leftarrow w$
  4.   Collect $n$ steps of experience $(s_t, a_t, r_t)_{t=1}^{n}$
  5.   For each $t$: $R_t = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V_{w'}(s_{t+n})$
  6.   $A_t = R_t - V_{w'}(s_t)$
  7.   $\nabla J = \sum_t \nabla_{\theta'} \log \pi_{\theta'}(a_t|s_t) \, A_t$
  8.   $\nabla L_V = \sum_t \nabla_{w'} (R_t - V_{w'}(s_t))^2$
  9.   Asynchronously apply the gradients to the global $\theta, w$
  10. end loop

A3C achieved great success in 2016, reaching performance close to DQN on Atari with faster training (by leveraging multi-core CPUs). But it has a drawback: asynchronous updates may cause stale gradients (by the time one worker finishes computing its gradient, the global parameters may already have been modified by other workers), affecting stability.

Modern implementations typically use the synchronous version, A2C (dropping the "Asynchronous"), sampling in parallel across multiple environments and applying a single unified parameter update, avoiding the problems of asynchrony.

Code Implementation: A2C Multi-Environment

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
import gym
import numpy as np

class ActorCritic(nn.Module):
    """Actor-Critic network with shared parameters"""
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super(ActorCritic, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)

        # Actor head
        self.actor = nn.Linear(hidden_dim, action_dim)

        # Critic head
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))

        action_probs = F.softmax(self.actor(x), dim=-1)
        state_value = self.critic(x)

        return action_probs, state_value

def a2c_multi_env(env_name='CartPole-v1', n_envs=8, n_steps=5,
                  episodes=500, gamma=0.99):
    # Create multiple environments
    envs = [gym.make(env_name) for _ in range(n_envs)]
    state_dim = envs[0].observation_space.shape[0]
    action_dim = envs[0].action_space.n

    # Initialize network
    model = ActorCritic(state_dim, action_dim)
    optimizer = optim.Adam(model.parameters(), lr=3e-4)

    # Initialize states
    states = [env.reset() for env in envs]
    episode_rewards = [0] * n_envs
    all_rewards = []

    for step in range(episodes * 200 // n_envs):  # Total steps
        # Store n-step experience
        log_probs_list = []
        values_list = []
        rewards_list = []
        dones_list = []

        for _ in range(n_steps):
            states_tensor = torch.FloatTensor(states)

            # Forward pass
            action_probs, values = model(states_tensor)
            dist = Categorical(action_probs)
            actions = dist.sample()

            log_probs_list.append(dist.log_prob(actions))
            values_list.append(values.squeeze())

            # Execute actions
            next_states = []
            rewards = []
            dones = []

            for i, (env, action) in enumerate(zip(envs, actions)):
                next_state, reward, done, _ = env.step(action.item())

                episode_rewards[i] += reward
                rewards.append(reward)
                dones.append(done)

                if done:
                    all_rewards.append(episode_rewards[i])
                    episode_rewards[i] = 0
                    next_state = env.reset()

                next_states.append(next_state)

            rewards_list.append(torch.FloatTensor(rewards))
            dones_list.append(torch.FloatTensor(dones))
            states = next_states

        # Compute n-step return
        with torch.no_grad():
            next_states_tensor = torch.FloatTensor(states)
            _, next_values = model(next_states_tensor)
            next_values = next_values.squeeze()

        returns = next_values
        advantage_list = []

        for t in reversed(range(n_steps)):
            returns = rewards_list[t] + gamma * returns * (1 - dones_list[t])
            advantage = returns - values_list[t]
            advantage_list.insert(0, advantage)

        # Compute loss
        log_probs = torch.stack(log_probs_list)
        values = torch.stack(values_list)
        advantages = torch.stack(advantage_list)
        returns_all = values + advantages

        actor_loss = -(log_probs * advantages.detach()).mean()
        critic_loss = F.mse_loss(values, returns_all.detach())

        # Entropy regularization (encourage exploration)
        entropy = -(action_probs * torch.log(action_probs + 1e-8)).sum(dim=-1).mean()

        loss = actor_loss + 0.5 * critic_loss - 0.01 * entropy

        # Update
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
        optimizer.step()

        if (step + 1) % 100 == 0 and len(all_rewards) > 0:
            print(f"Step {step+1}, Avg Reward: {np.mean(all_rewards[-100:]):.2f}")

    return model, all_rewards

# Run
model, rewards = a2c_multi_env(n_envs=8, episodes=500)

This implementation uses 8 parallel environments, sampling 5 steps each time (n-step return), greatly improving sample efficiency and training speed.

Continuous Control: DDPG and TD3

From Discrete to Continuous: Challenges

Previous algorithms (REINFORCE, A2C) all target discrete action spaces. For continuous actions $a \in \mathbb{R}^d$ (like robot joint angles, or the steering wheel angle in autonomous driving), the policy is typically modeled as a Gaussian distribution: $\pi_\theta(a|s) = \mathcal{N}(\mu_\theta(s), \sigma_\theta(s)^2)$. The network outputs the mean $\mu_\theta(s)$ and standard deviation $\sigma_\theta(s)$, and actions are sampled as $a \sim \pi_\theta(\cdot|s)$.

But this stochastic policy has a problem: in some tasks (like precise control), the optimal policy may be deterministic. Sampling each time introduces unnecessary noise, degrading performance.

DDPG's (Deep Deterministic Policy Gradient) idea: learn a deterministic policy $a = \mu_\theta(s)$ that directly outputs the action, with no sampling needed.

DDPG: Deterministic Policy Gradient

DDPG (Lillicrap et al., 2016) combines ideas from DQN and Actor-Critic:

- Like DQN, it uses experience replay and target networks
- Like Actor-Critic, it separates the policy (Actor) and value (Critic)

The Deterministic Policy Gradient Theorem (Silver et al., 2014) states that for a deterministic policy $\mu_\theta(s)$, the gradient is:
$$\nabla_\theta J(\theta) = \mathbb{E}_{s \sim \rho^\mu}\left[\nabla_\theta \mu_\theta(s) \, \nabla_a Q(s, a)\big|_{a = \mu_\theta(s)}\right]$$
Intuition: the value function $Q(s, a)$ tells us "how good action $a$ is in state $s$"; we want the policy-outputted action $\mu_\theta(s)$ to move in the direction that increases $Q$. $\nabla_a Q(s, a)$ is Q's gradient with respect to the action, pointing toward increasing Q; $\nabla_\theta \mu_\theta(s)$ is the policy's gradient with respect to its parameters; the chain rule connects them.
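The chain rule in the theorem is exactly what autograd computes when we differentiate $Q(s, \mu_\theta(s))$ with respect to $\theta$. A minimal 1-D sketch, with a made-up critic $Q(s, a) = -(a-2)^2$ and policy $\mu_\theta(s) = \theta s$ (both hypothetical, chosen so the gradient is easy to verify by hand):

```python
import torch

# Toy deterministic policy gradient: Q(s, a) = -(a - 2)^2 peaks at a = 2,
# and the policy is mu_theta(s) = theta * s
theta = torch.tensor(0.5, requires_grad=True)
s = torch.tensor(1.0)

a = theta * s               # mu_theta(s)
Q = -(a - 2.0) ** 2         # critic evaluated at the policy's action

Q.backward()                # chain rule: dQ/dtheta = dQ/da * da/dtheta
print(theta.grad.item())    # -2 * (0.5 - 2) * 1 = 3.0
```

Gradient ascent on $\theta$ moves the action toward the critic's peak at $a = 2$; this is precisely the `-critic(state, actor(state)).mean()` actor loss used in the DDPG/TD3 code below.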

DDPG Algorithm:


Algorithm: DDPG

  1. Initialize Actor $\mu_\theta$ and Critic $Q_w$
  2. Initialize target networks $\mu_{\theta'} \leftarrow \mu_\theta$, $Q_{w'} \leftarrow Q_w$
  3. Initialize replay buffer $\mathcal{D}$
  4. for episode $= 1, \ldots, M$ do
  5.   Initialize random exploration noise $\mathcal{N}$
  6.   Observe initial state $s_1$
  7.   for $t = 1, \ldots, T$ do
  8.     $a_t = \mu_\theta(s_t) + \mathcal{N}_t$
  9.     Execute $a_t$, observe $r_t, s_{t+1}$; store $(s_t, a_t, r_t, s_{t+1})$ in $\mathcal{D}$
  10.    Sample a minibatch of $N$ transitions $(s_i, a_i, r_i, s'_i)$ from $\mathcal{D}$
  11.    Compute target actions $a'_i = \mu_{\theta'}(s'_i)$
  12.    $y_i = r_i + \gamma Q_{w'}(s'_i, a'_i)$
  13.    Update Critic: minimize $L = \frac{1}{N} \sum_i (y_i - Q_w(s_i, a_i))^2$
  14.    Update Actor: $\nabla_\theta J \approx \frac{1}{N} \sum_i \nabla_\theta \mu_\theta(s_i) \, \nabla_a Q_w(s_i, a)\big|_{a = \mu_\theta(s_i)}$
  15.    Soft update: $\theta' \leftarrow \tau\theta + (1-\tau)\theta'$, $w' \leftarrow \tau w + (1-\tau)w'$
  16.  end for
  17. end for


Key points:

- Line 8: Deterministic policy plus exploration noise (typically an Ornstein-Uhlenbeck process)
- Line 12: The target networks compute the TD target; note the action also comes from the target Actor
- Line 15: Soft update, moving the target parameters a small fraction $\tau$ toward the online parameters each step, smoother than DQN's hard update

TD3: Twin Delayed DDPG

DDPG has a serious problem: Q-value overestimation. The reason is similar to DQN: the target $y = r + \gamma Q_{w'}(s', \mu_{\theta'}(s'))$ uses an action selected by the Actor, and the Actor is trained to maximize the Q-value; the two mutually reinforce, causing Q-values to spiral upward.

TD3 (Twin Delayed DDPG, Fujimoto et al., 2018) introduces three tricks to mitigate this:

Trick 1: Clipped Double Q-Learning

Learn two Critics and take the smaller value as the target:
$$y = r + \gamma \min_{j=1,2} Q_{w'_j}(s', a')$$
Intuition: the probability that both independent estimates overestimate is lower; taking the minimum is conservative, suppressing overestimation.
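A quick simulation shows why the minimum is conservative. Under the hypothetical assumption that the true Q-value is 0 and each critic's estimate carries independent Gaussian noise, a single estimate is unbiased while the minimum of two is biased low, which counteracts the upward bias introduced by bootstrapping on maximizing actions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: true Q = 0, two critics with independent unit-Gaussian noise
n = 100_000
q1 = rng.normal(0.0, 1.0, n)
q2 = rng.normal(0.0, 1.0, n)

single = q1.mean()                    # single critic: unbiased, near 0
clipped = np.minimum(q1, q2).mean()   # min of two: biased low (about -0.56)
print(single, clipped)
```

TD3 accepts this deliberate underestimation because, for bootstrapped targets, a slightly pessimistic critic is far less harmful than an optimistic one the Actor can exploit.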

Trick 2: Delayed Policy Updates

The Actor updates less frequently than the Critic: update the Actor once for every 2 Critic updates. Reason: the Critic needs to first converge toward an accurate Q-value before the Actor can optimize against it. If updated synchronously, the Actor might exploit the Critic's errors and learn a wrong policy.

Trick 3: Target Policy Smoothing

When computing the target, add noise to the target action:
$$a' = \mu_{\theta'}(s') + \epsilon, \quad \epsilon \sim \mathrm{clip}(\mathcal{N}(0, \sigma), -c, c)$$
Intuition: the smoothed target won't fluctuate drastically due to a single action's anomalous Q-value, acting like regularization.

The complete algorithm is long; here is the core code:

# Assumes Actor and Critic network classes are defined elsewhere
class TD3:
    def __init__(self, state_dim, action_dim, max_action):
        # Actor
        self.actor = Actor(state_dim, action_dim, max_action)
        self.actor_target = Actor(state_dim, action_dim, max_action)
        self.actor_target.load_state_dict(self.actor.state_dict())
        self.actor_optimizer = optim.Adam(self.actor.parameters(), lr=3e-4)

        # Two Critics
        self.critic_1 = Critic(state_dim, action_dim)
        self.critic_2 = Critic(state_dim, action_dim)
        self.critic_1_target = Critic(state_dim, action_dim)
        self.critic_2_target = Critic(state_dim, action_dim)
        self.critic_1_target.load_state_dict(self.critic_1.state_dict())
        self.critic_2_target.load_state_dict(self.critic_2.state_dict())
        self.critic_optimizer = optim.Adam(
            list(self.critic_1.parameters()) + list(self.critic_2.parameters()),
            lr=3e-4
        )

        self.max_action = max_action
        self.total_it = 0

    def select_action(self, state):
        state = torch.FloatTensor(state).unsqueeze(0)
        return self.actor(state).cpu().data.numpy().flatten()

    def train(self, replay_buffer, batch_size=256):
        self.total_it += 1

        # Sample
        state, action, reward, next_state, done = replay_buffer.sample(batch_size)

        with torch.no_grad():
            # Target policy smoothing
            noise = (torch.randn_like(action) * 0.2).clamp(-0.5, 0.5)
            next_action = (self.actor_target(next_state) + noise).clamp(-self.max_action, self.max_action)

            # Clipped Double Q-Learning
            target_Q1 = self.critic_1_target(next_state, next_action)
            target_Q2 = self.critic_2_target(next_state, next_action)
            target_Q = torch.min(target_Q1, target_Q2)
            target_Q = reward + (1 - done) * 0.99 * target_Q

        # Update Critic
        current_Q1 = self.critic_1(state, action)
        current_Q2 = self.critic_2(state, action)
        critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)

        self.critic_optimizer.zero_grad()
        critic_loss.backward()
        self.critic_optimizer.step()

        # Delayed policy updates
        if self.total_it % 2 == 0:
            # Update Actor
            actor_loss = -self.critic_1(state, self.actor(state)).mean()

            self.actor_optimizer.zero_grad()
            actor_loss.backward()
            self.actor_optimizer.step()

            # Soft update target networks
            for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
                target_param.data.copy_(0.005 * param.data + 0.995 * target_param.data)

            for param, target_param in zip(self.critic_1.parameters(), self.critic_1_target.parameters()):
                target_param.data.copy_(0.005 * param.data + 0.995 * target_param.data)

            for param, target_param in zip(self.critic_2.parameters(), self.critic_2_target.parameters()):
                target_param.data.copy_(0.005 * param.data + 0.995 * target_param.data)

TD3 surpassed DDPG on MuJoCo continuous control tasks, becoming the baseline for off-policy continuous control.

Trust Region Methods: TRPO and PPO

Policy Update Dilemma

Policy gradient methods have a fundamental problem: the learning rate is hard to tune. Too small, and training is slow; too large, and the new policy may be much worse than the old one (the gradient is only local information), causing performance collapse.

One improvement idea: limit each update's step size, ensuring the new policy isn't "too far" from the old. But how do we measure "distance"? The Euclidean distance $\|\theta_{\mathrm{new}} - \theta_{\mathrm{old}}\|$ isn't suitable, because distance in parameter space doesn't equal distance in policy space.

Trust region methods measure policy distance with the KL divergence:
$$D_{\mathrm{KL}}(\pi_{\theta_{\mathrm{old}}} \| \pi_\theta) = \mathbb{E}_s\left[\sum_a \pi_{\theta_{\mathrm{old}}}(a|s) \log \frac{\pi_{\theta_{\mathrm{old}}}(a|s)}{\pi_\theta(a|s)}\right]$$
and constrain $D_{\mathrm{KL}} \le \delta$ for a small threshold $\delta$.

TRPO: Rigorous Trust Region Optimization

TRPO (Trust Region Policy Optimization, Schulman et al., 2015) writes policy optimization as a constrained optimization:
$$\max_\theta \; \mathbb{E}_t\left[\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t|s_t)} A_t\right] \quad \text{s.t.} \quad \mathbb{E}_t\left[D_{\mathrm{KL}}\left(\pi_{\theta_{\mathrm{old}}}(\cdot|s_t) \,\|\, \pi_\theta(\cdot|s_t)\right)\right] \le \delta$$
The importance sampling weight $\frac{\pi_\theta}{\pi_{\theta_{\mathrm{old}}}}$ in the objective allows updating the new policy with the old policy's data (off-policy).

TRPO solves this constrained optimization with conjugate gradient method, theoretically guaranteeing monotonic improvement (new policy not worse than old). But implementation is complex, computationally expensive.

PPO: Simplified Trust Region

PPO (Proximal Policy Optimization, Schulman et al., 2017) simplifies TRPO, replacing the KL constraint with clipping:
$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta) A_t, \; \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) A_t\right)\right]$$
where:

- $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t|s_t)}$ is the importance sampling weight
- $\mathrm{clip}$ clips $r_t(\theta)$ to the range $[1-\epsilon, 1+\epsilon]$ (like $\epsilon = 0.2$)

Intuition:

- If $A_t > 0$ (good action), we want to increase $\pi_\theta(a_t|s_t)$, i.e., increase $r_t(\theta)$. But clipping limits $r_t(\theta)$ to at most $1+\epsilon$, preventing it from growing too fast.
- If $A_t < 0$ (bad action), we want to decrease $\pi_\theta(a_t|s_t)$, i.e., decrease $r_t(\theta)$. But clipping limits $r_t(\theta)$ to at least $1-\epsilon$, preventing it from shrinking too fast.
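The clipping mechanism can be probed directly: for a single (ratio, advantage) pair with positive advantage, the surrogate's gradient with respect to the ratio is normal inside the trust region and exactly zero once the ratio exceeds $1+\epsilon$, so the update simply stops pushing further. A small sketch (the ratio values are illustrative):

```python
import torch

# PPO clipped surrogate for a single (ratio, advantage) pair
def ppo_objective(ratio, advantage, eps=0.2):
    surr1 = ratio * advantage
    surr2 = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return torch.min(surr1, surr2)

results = {}
for r in [0.7, 1.0, 1.3]:
    ratio = torch.tensor(r, requires_grad=True)
    obj = ppo_objective(ratio, torch.tensor(2.0))  # advantage A = 2 > 0
    obj.backward()
    results[r] = (obj.item(), ratio.grad.item())

print(results)  # gradient is 2.0 at r = 0.7 and r = 1.0, but 0.0 at r = 1.3
```

Note that for a positive advantage the clipping only caps growth above $1+\epsilon$; below 1 the unclipped branch is the minimum, so the gradient still pulls the ratio back up.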

PPO advantages:

- Simple implementation, just add clipping to the loss
- No need to compute the KL divergence or a Hessian matrix
- Performance close to TRPO but faster

PPO has become the most commonly used policy gradient algorithm in industry, widely used by OpenAI, DeepMind, etc.

Complete PPO Implementation

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
import gym
import numpy as np

class PPOActorCritic(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super(PPOActorCritic, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, action_dim)
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        x = torch.tanh(self.fc1(state))
        x = torch.tanh(self.fc2(x))
        action_probs = F.softmax(self.actor(x), dim=-1)
        state_value = self.critic(x)
        return action_probs, state_value

class PPO:
    def __init__(self, state_dim, action_dim, lr=3e-4, gamma=0.99,
                 eps_clip=0.2, K_epochs=10):
        self.gamma = gamma
        self.eps_clip = eps_clip
        self.K_epochs = K_epochs

        self.policy = PPOActorCritic(state_dim, action_dim)
        self.optimizer = optim.Adam(self.policy.parameters(), lr=lr)
        self.policy_old = PPOActorCritic(state_dim, action_dim)
        self.policy_old.load_state_dict(self.policy.state_dict())

        self.MseLoss = nn.MSELoss()

    def select_action(self, state):
        with torch.no_grad():
            state = torch.FloatTensor(state).unsqueeze(0)
            action_probs, _ = self.policy_old(state)
            dist = Categorical(action_probs)
            action = dist.sample()
            action_logprob = dist.log_prob(action)

        return action.item(), action_logprob.item()

    def update(self, memory):
        # Monte Carlo estimate of returns
        rewards = []
        discounted_reward = 0
        for reward, is_terminal in zip(reversed(memory.rewards), reversed(memory.is_terminals)):
            if is_terminal:
                discounted_reward = 0
            discounted_reward = reward + (self.gamma * discounted_reward)
            rewards.insert(0, discounted_reward)

        # Normalize
        rewards = torch.tensor(rewards, dtype=torch.float32)
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-7)

        # Convert to tensors
        old_states = torch.FloatTensor(memory.states)
        old_actions = torch.LongTensor(memory.actions)
        old_logprobs = torch.FloatTensor(memory.logprobs)

        # Optimize for K epochs
        for _ in range(self.K_epochs):
            # Evaluate actions and states
            action_probs, state_values = self.policy(old_states)
            dist = Categorical(action_probs)
            action_logprobs = dist.log_prob(old_actions)
            dist_entropy = dist.entropy()

            state_values = state_values.squeeze()

            # Importance sampling weights
            ratios = torch.exp(action_logprobs - old_logprobs.detach())

            # Advantages
            advantages = rewards - state_values.detach()

            # PPO clipped loss
            surr1 = ratios * advantages
            surr2 = torch.clamp(ratios, 1 - self.eps_clip, 1 + self.eps_clip) * advantages

            loss = -torch.min(surr1, surr2) + 0.5 * self.MseLoss(state_values, rewards) - 0.01 * dist_entropy

            # Update
            self.optimizer.zero_grad()
            loss.mean().backward()
            self.optimizer.step()

        # Copy new weights to old policy
        self.policy_old.load_state_dict(self.policy.state_dict())

class Memory:
    def __init__(self):
        self.actions = []
        self.states = []
        self.logprobs = []
        self.rewards = []
        self.is_terminals = []

    def clear_memory(self):
        del self.actions[:]
        del self.states[:]
        del self.logprobs[:]
        del self.rewards[:]
        del self.is_terminals[:]

def train_ppo(env_name='CartPole-v1', max_episodes=1000,
              update_timestep=200):
    env = gym.make(env_name)
    state_dim = env.observation_space.shape[0]
    action_dim = env.action_space.n

    ppo = PPO(state_dim, action_dim)
    memory = Memory()

    timestep = 0
    episode_rewards = []

    for episode in range(max_episodes):
        state = env.reset()
        episode_reward = 0

        for t in range(500):
            timestep += 1

            # Select action
            action, action_logprob = ppo.select_action(state)
            next_state, reward, done, _ = env.step(action)

            # Store
            memory.states.append(state)
            memory.actions.append(action)
            memory.logprobs.append(action_logprob)
            memory.rewards.append(reward)
            memory.is_terminals.append(done)

            state = next_state
            episode_reward += reward

            # Update
            if timestep % update_timestep == 0:
                ppo.update(memory)
                memory.clear_memory()

            if done:
                break

        episode_rewards.append(episode_reward)

        if (episode + 1) % 50 == 0:
            avg_reward = np.mean(episode_rewards[-50:])
            print(f"Episode {episode+1}, Avg Reward: {avg_reward:.2f}")

    return ppo, episode_rewards

# Train
ppo, rewards = train_ppo(max_episodes=1000)

PPO typically solves CartPole within 100-200 episodes with very smooth training curves (compared to REINFORCE).

Maximum Entropy Reinforcement Learning: SAC

Motivation for Entropy Regularization

Traditional RL goal is maximizing expected return. But this has a problem: policies may prematurely converge to local optima, lacking exploration.

One improvement idea is encouraging policy "diversity": don't always select the same action, but maintain some randomness. The metric for randomness is entropy:
$$H(\pi(\cdot|s)) = -\sum_a \pi(a|s) \log \pi(a|s)$$
Higher entropy means a more random policy; zero entropy means completely deterministic.
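A few concrete values make the metric tangible. This sketch computes the entropy of some made-up four-action distributions:

```python
import numpy as np

# Entropy H(pi) = -sum_a pi(a) log pi(a) for a few discrete action distributions
def entropy(probs):
    p = np.asarray(probs, dtype=float)
    nz = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-np.sum(nz * np.log(nz)))

h_uniform = entropy([0.25, 0.25, 0.25, 0.25])    # maximum: log(4) ~ 1.386
h_peaked = entropy([0.7, 0.1, 0.1, 0.1])         # partially committed: lower
h_deterministic = entropy([1.0, 0.0, 0.0, 0.0])  # fully committed: zero

print(h_uniform, h_peaked, h_deterministic)
```

The uniform policy attains the maximum $\log|A|$; as probability mass concentrates on one action, entropy falls to zero, which is exactly what the entropy bonus in SAC pushes against.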

The maximum entropy reinforcement learning objective:
$$J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \pi}\left[r_t + \alpha H(\pi(\cdot|s_t))\right]$$
where $\alpha$ is the temperature coefficient controlling the entropy weight. This objective encourages policies to maximize return while maintaining exploration.

Benefits:

- Automatic exploration: no need to manually design exploration strategies (like $\epsilon$-greedy)
- Robustness: smoother policies, insensitive to environment perturbations
- Avoids local optima: the entropy bonus prevents premature policy convergence

SAC: Soft Actor-Critic

SAC (Soft Actor-Critic, Haarnoja et al., 2018) is an off-policy algorithm under the maximum entropy framework, combining:

- Actor-Critic architecture
- Experience replay and target networks (like DDPG)
- Entropy regularization

Core ideas:

  1. Soft Q-function: Q-value includes entropy bonus

  2. Policy update: maximize Q-value + entropy

  3. Automatic temperature tuning: $\alpha$ is not fixed but automatically adjusted based on a target entropy

Pseudocode (simplified):


Algorithm: SAC

  1. Initialize Actor $\pi_\theta$, two Critics $Q_{w_1}, Q_{w_2}$, target Critics $Q_{w'_1}, Q_{w'_2}$, temperature $\alpha$
  2. for step $t$ do
  3.   Sample action $a_t \sim \pi_\theta(\cdot|s_t)$, execute and store $(s_t, a_t, r_t, s_{t+1})$
  4.   Sample minibatch $(s_i, a_i, r_i, s'_i)$
  5.   Sample $a'_i \sim \pi_\theta(\cdot|s'_i)$ and compute $y_i = r_i + \gamma \left(\min_j Q_{w'_j}(s'_i, a'_i) - \alpha \log \pi_\theta(a'_i|s'_i)\right)$
  6.   Update Critics: minimize $L_Q = \sum_i (y_i - Q_{w_j}(s_i, a_i))^2$ for $j = 1, 2$
  7.   Update Actor: minimize $L_\pi = \mathbb{E}_{s, a \sim \pi_\theta}\left[\alpha \log \pi_\theta(a|s) - \min_j Q_{w_j}(s, a)\right]$
  8.   Update temperature: minimize $L_\alpha = -\alpha \, \mathbb{E}\left[\log \pi_\theta(a_t|s_t) + \mathcal{H}_{\mathrm{target}}\right]$
  9.   Soft update target networks
  10. end for


SAC performs excellently on continuous control tasks, balancing sample efficiency (off-policy) and stability (maximum entropy), widely applied to robot control.

Code Framework

The complete SAC implementation is long (about 500 lines); here is the core part:

class SAC:
    def __init__(self, state_dim, action_dim, max_action):
        self.actor = GaussianPolicy(state_dim, action_dim, max_action)
        self.critic_1 = QNetwork(state_dim, action_dim)
        self.critic_2 = QNetwork(state_dim, action_dim)
        self.critic_1_target = QNetwork(state_dim, action_dim)
        self.critic_2_target = QNetwork(state_dim, action_dim)
        # Target networks start as copies of the online Critics
        self.critic_1_target.load_state_dict(self.critic_1.state_dict())
        self.critic_2_target.load_state_dict(self.critic_2.state_dict())

        # Automatic temperature tuning
        self.target_entropy = -action_dim
        self.log_alpha = torch.zeros(1, requires_grad=True)
        self.alpha = self.log_alpha.exp()

        self.actor_optimizer = optim.Adam(self.actor.parameters(), lr=3e-4)
        self.critic_optimizer = optim.Adam(
            list(self.critic_1.parameters()) + list(self.critic_2.parameters()),
            lr=3e-4
        )
        self.alpha_optimizer = optim.Adam([self.log_alpha], lr=3e-4)

    def select_action(self, state, evaluate=False):
        state = torch.FloatTensor(state).unsqueeze(0)
        if evaluate:
            _, _, action = self.actor.sample(state)  # deterministic mean action
        else:
            action, _, _ = self.actor.sample(state)  # stochastic sample
        return action.detach().cpu().numpy()[0]

    def train(self, replay_buffer, batch_size=256):
        state, action, reward, next_state, done = replay_buffer.sample(batch_size)

        with torch.no_grad():
            # Sample next action
            next_action, next_log_prob, _ = self.actor.sample(next_state)

            # Compute target Q-value (including entropy)
            target_Q1 = self.critic_1_target(next_state, next_action)
            target_Q2 = self.critic_2_target(next_state, next_action)
            target_Q = torch.min(target_Q1, target_Q2) - self.alpha * next_log_prob
            target_Q = reward + (1 - done) * 0.99 * target_Q

        # Update Critic
        current_Q1 = self.critic_1(state, action)
        current_Q2 = self.critic_2(state, action)
        critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)

        self.critic_optimizer.zero_grad()
        critic_loss.backward()
        self.critic_optimizer.step()

        # Update Actor (alpha is detached so actor gradients don't flow into log_alpha)
        new_action, log_prob, _ = self.actor.sample(state)
        Q1_new = self.critic_1(state, new_action)
        Q2_new = self.critic_2(state, new_action)
        Q_new = torch.min(Q1_new, Q2_new)

        actor_loss = (self.alpha.detach() * log_prob - Q_new).mean()

        self.actor_optimizer.zero_grad()
        actor_loss.backward()
        self.actor_optimizer.step()

        # Update temperature parameter
        alpha_loss = -(self.log_alpha * (log_prob + self.target_entropy).detach()).mean()

        self.alpha_optimizer.zero_grad()
        alpha_loss.backward()
        self.alpha_optimizer.step()

        self.alpha = self.log_alpha.exp()

        # Soft update target networks (tau = 0.005)
        for param, target_param in zip(self.critic_1.parameters(), self.critic_1_target.parameters()):
            target_param.data.copy_(0.005 * param.data + 0.995 * target_param.data)

        for param, target_param in zip(self.critic_2.parameters(), self.critic_2_target.parameters()):
            target_param.data.copy_(0.005 * param.data + 0.995 * target_param.data)

SAC typically outperforms TD3 and PPO on MuJoCo tasks and is currently among the top algorithms for continuous control.

Algorithm Comparison and Selection Guide

Main Algorithm Comparison

| Algorithm | Type | Action Space | Sample Efficiency | Stability | Implementation | Suitable Scenarios |
| --- | --- | --- | --- | --- | --- | --- |
| REINFORCE | On-policy | Discrete/Continuous | Low | Medium | Easy | Simple tasks, teaching |
| A2C/A3C | On-policy | Discrete/Continuous | Medium | Medium | Medium | Atari, parallel environments |
| PPO | On-policy | Discrete/Continuous | Medium | High | Medium | General, industry favorite |
| TRPO | On-policy | Discrete/Continuous | Medium | High | Hard | Theoretical research |
| DDPG | Off-policy | Continuous | High | Low | Medium | Continuous control, superseded by TD3 |
| TD3 | Off-policy | Continuous | High | Medium | Medium | Continuous control, sample efficiency important |
| SAC | Off-policy | Continuous | High | High | Hard | Continuous control, pursuing performance |

Selection Suggestions

1. By Action Space: - Discrete actions: PPO (preferred), A2C, DQN - Continuous actions: SAC (preferred), TD3, PPO

2. By Sample Budget: - Expensive samples (like real robots): SAC, TD3 (off-policy, high sample efficiency) - Abundant samples (like simulators): PPO (more stable)

3. By Implementation Resources: - Fast prototyping: PPO (OpenAI Baselines, Stable Baselines have ready implementations) - From scratch: REINFORCE or A2C (simple code)

4. By Task Characteristics: - Sparse rewards: SAC (entropy encourages exploration) - Dense rewards: PPO or TD3 - Partial observability: LSTM + A2C or PPO - Multi-agent: MADDPG (multi-agent extension of DDPG)

5. Industrial Applications: - OpenAI, DeepMind: PPO for large-scale training (like Dota 2, StarCraft II) - Robot control: SAC (recommended by Berkeley RL lab) - Autonomous driving: SAC or TD3

Deep Q&A

Q1: Why can Policy Gradient handle continuous actions while DQN cannot?

A: The fundamental difference is policy representation. DQN learns the Q-function $Q(s,a)$, implicitly obtaining a policy through $\pi(s) = \arg\max_a Q(s,a)$. In discrete spaces (like 18 actions), enumerating all $a$ to compute $Q(s,a)$ and taking the maximum is simple. But in a continuous space $\mathcal{A} \subseteq \mathbb{R}^n$ (like a robot with 7 joint angles), one must solve $\max_{a \in \mathcal{A}} Q(s,a)$, a continuous optimization problem with no analytical solution that requires iterative algorithms (like gradient ascent). Running an optimizer at every action selection is computationally expensive and imprecise.

Policy Gradient methods directly parameterize the policy $\pi_\theta(a|s)$: - Discrete actions: output a Softmax distribution, then sample or take the argmax - Continuous actions: output a Gaussian mean and variance, then sample $a \sim \mathcal{N}(\mu_\theta(s), \sigma_\theta(s)^2)$. One forward pass yields an action, no optimization needed, so continuous spaces are supported naturally.
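To make the contrast concrete, here is a hedged scalar sketch (names and shapes are illustrative, not from the post) of how a Gaussian policy head turns one forward pass into an action and its log-probability:

```python
import math
import random

def gaussian_policy_sample(mu, log_std):
    """Sample a ~ N(mu, sigma^2) via the reparameterization a = mu + sigma * eps
    and return (action, log pi(a|s)). No inner optimization over actions is needed."""
    std = math.exp(log_std)
    eps = random.gauss(0.0, 1.0)
    action = mu + std * eps
    # Log-density of the Gaussian at the sampled action
    log_prob = (-0.5 * ((action - mu) / std) ** 2
                - log_std - 0.5 * math.log(2 * math.pi))
    return action, log_prob

action, log_prob = gaussian_policy_sample(mu=0.0, log_std=0.0)
```

In a real network, `mu` and `log_std` would be the outputs of the policy's last layer for the current state.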

Q2: Why does REINFORCE have high variance? How to reduce it?

A: REINFORCE uses the complete return $G_t = \sum_{k=t}^{T} \gamma^{k-t} r_k$ to estimate the Q-value. Variance sources: 1. Trajectory randomness: different trajectories' $G_t$ may vary greatly 2. Long-term accumulation: randomness accumulates over time, so a longer horizon $T$ means higher variance

Methods to reduce variance:

Method 1: Baseline. Subtract the state value $V(s_t)$, keeping only the advantage $A_t = G_t - V(s_t)$. $V(s_t)$ is the "average" return of that state; subtracting it removes the state's inherent quality, focusing only on the action's relative merit. Experiments show variance can drop by 50%-90%.

Method 2: Critic (Actor-Critic). Replace the Monte Carlo estimate with a function approximation $Q_w(s,a)$ or $V_w(s)$, introducing bias but greatly reducing variance. The TD error $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ depends only on a one-step transition, so its variance is much smaller than that of $G_t$.

Method 3: Multi-step Return. Use the n-step return $G_t^{(n)} = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V(s_{t+n})$ to trade off bias and variance. $n = 1$ is TD (low variance, high bias); $n = \infty$ is Monte Carlo (high variance, low bias). In practice, $n = 5$ to $n = 20$ often works well.

Method 4: GAE (Generalized Advantage Estimation). Exponentially weight the multi-step TD errors: $A_t^{GAE(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$, where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. The parameter $\lambda \in [0,1]$ controls the bias-variance tradeoff ($\lambda = 0$ recovers one-step TD, $\lambda = 1$ recovers Monte Carlo). PPO commonly uses $\lambda = 0.95$.
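The four methods above converge in the GAE recursion, which can be computed in a single backward pass over a trajectory segment (a sketch; variable names are illustrative):

```python
def gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one trajectory segment.
    `values` must have len(rewards) + 1 entries (the last is the bootstrap value)."""
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        nonterminal = 1.0 - dones[t]
        # One-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        # Recursion: A_t = delta_t + gamma * lambda * A_{t+1}
        running = delta + gamma * lam * nonterminal * running
        advantages[t] = running
    return advantages

adv = gae(rewards=[1.0, 1.0, 1.0], values=[0.5, 0.5, 0.5, 0.0], dones=[0.0, 0.0, 1.0])
```

Setting `lam=0.0` makes each advantage equal its one-step TD error; `lam=1.0` gives the discounted Monte Carlo return minus the baseline.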

Q3: How does PPO's clipping mechanism work?

A: PPO's objective is $L^{CLIP}(\theta) = \mathbb{E}_t\big[\min\big(r_t(\theta) A_t,\ \text{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\, A_t\big)\big]$, where $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)}$ is the probability ratio between the new and old policies.

Case 1: $A_t > 0$ (good action, want to increase its probability) - If $r_t(\theta) \le 1+\epsilon$: the first term $r_t(\theta) A_t$ is taken, and the probability increases normally - If $r_t(\theta) > 1+\epsilon$: the second term $(1+\epsilon) A_t$ is taken; its gradient is 0 (the clip is a constant), so the increase stops

Intuition: a good action's probability can't grow too fast, at most to $1+\epsilon$ times the old policy's (like 1.2x with $\epsilon = 0.2$).

Case 2: $A_t < 0$ (bad action, want to decrease its probability) - If $r_t(\theta) \ge 1-\epsilon$: the first term $r_t(\theta) A_t$ is taken (note $A_t < 0$, so the objective pushes the probability down) - If $r_t(\theta) < 1-\epsilon$: the second term $(1-\epsilon) A_t$ is taken; its gradient is 0, so the decrease stops

Intuition: a bad action's probability can't shrink too fast, at most down to $1-\epsilon$ times the old policy's (like 0.8x).

This mechanism ensures new policy's KL divergence from old won't be too large, avoiding "one step too far" causing performance collapse.
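The two cases can be traced with a tiny numeric sketch (scalar version, illustrative only):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Scalar PPO clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped_ratio * advantage)

# Case 1: A > 0, ratio already above 1 + eps -> objective is capped at (1 + eps) * A
print(ppo_clip_objective(1.5, 1.0))   # 1.2
# Case 2: A < 0, ratio already below 1 - eps -> objective is floored at (1 - eps) * A
print(ppo_clip_objective(0.5, -1.0))  # -0.8
```

Once the ratio leaves the trust interval $[1-\epsilon, 1+\epsilon]$ in the harmful direction, the objective becomes flat in $\theta$ and the gradient vanishes.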

Q4: What are core differences between DDPG and TD3?

A: TD3 adds three tricks on top of DDPG to solve Q-value overestimation:

1. Clipped Double Q-Learning - DDPG: single Critic, target $y = r + \gamma Q_{\theta'}(s', \mu_{\phi'}(s'))$ - TD3: two Critics, target $y = r + \gamma \min_{j=1,2} Q_{\theta'_j}(s', a')$. Taking the minimum suppresses overestimation. Reason: the probability that two independent Q-networks simultaneously overestimate the same action is lower, so the minimum is conservative.

2. Delayed Policy Updates - DDPG: Actor and Critic update synchronously, every step updates both - TD3: Critic updates 2 times before Actor updates 1 time

Reason: Critic needs to first converge to accurate Q-value before Actor can optimize based on accurate gradient. If updated synchronously, Actor might exploit Critic's errors learning wrong policy.

3. Target Policy Smoothing - DDPG: target action $a' = \mu_{\phi'}(s')$ - TD3: $a' = \mu_{\phi'}(s') + \epsilon$, with $\epsilon \sim \text{clip}(\mathcal{N}(0, \sigma), -c, c)$. Adding noise smooths the Q-function, avoiding drastic target fluctuations caused by a single action's anomalous Q-value.

Experiments show all three tricks are important; combined TD3 comprehensively surpasses DDPG on MuJoCo, becoming new off-policy continuous control baseline.
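The three tricks can be sketched in scalar form (illustrative helper names, not TD3's official implementation):

```python
import random

def clipped_double_q_target(reward, not_done, next_q1, next_q2, gamma=0.99):
    """Trick 1: pessimistic target y = r + gamma * min(Q1', Q2')."""
    return reward + not_done * gamma * min(next_q1, next_q2)

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Trick 3: a' = clip(mu(s') + clip(eps, -c, c), -a_max, a_max)."""
    eps = max(min(random.gauss(0.0, noise_std), noise_clip), -noise_clip)
    return max(min(mu + eps, max_action), -max_action)

# Trick 2 (delayed updates) is just scheduling: update the Actor once
# for every two Critic updates.
y = clipped_double_q_target(reward=1.0, not_done=1.0, next_q1=5.0, next_q2=3.0)
```

Here the target uses the smaller Critic estimate (3.0), ignoring the possibly overestimated 5.0.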

Q5: How does SAC's automatic temperature tuning work?

A: SAC's objective is $J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[r(s_t, a_t) + \alpha \mathcal{H}(\pi(\cdot|s_t))\big]$. The temperature $\alpha$ controls the entropy weight: a large $\alpha$ means a more random policy (exploration); a small $\alpha$ means a more deterministic one (exploitation).

Early SAC set $\alpha$ manually, but the optimal $\alpha$ varies greatly across tasks. Later work (Haarnoja et al., 2018) proposed automatic tuning:

Target Entropy Constraint: set a target entropy $\bar{\mathcal{H}}$ (like $-\dim(\mathcal{A})$, the negative action dimension), requiring $\mathbb{E}_{(s,a) \sim \rho_\pi}[-\log \pi(a|s)] \ge \bar{\mathcal{H}}$, meaning the policy's average entropy stays at or above the target.

Dual Optimization: introduce a Lagrange multiplier $\alpha$; the optimization over $\alpha$ becomes $\min_{\alpha \ge 0} \mathbb{E}_{a \sim \pi}\big[-\alpha \log \pi(a|s) - \alpha \bar{\mathcal{H}}\big]$. Taking the derivative with respect to $\alpha$: - If the current policy entropy $\mathcal{H}(\pi) < \bar{\mathcal{H}}$ (too deterministic), increase $\alpha$ to strengthen the entropy bonus, encouraging exploration - If $\mathcal{H}(\pi) > \bar{\mathcal{H}}$ (too random), decrease $\alpha$, letting the policy become more deterministic

In implementation, parameterizing $\alpha = \exp(\log \alpha)$ ensures non-negativity, and we optimize $J(\alpha) = \mathbb{E}_{a \sim \pi}\big[-\alpha\,(\log \pi(a|s) + \bar{\mathcal{H}})\big]$. Note that $\mathbb{E}[\log \pi(a|s)]$ is the negative entropy, so this loss compares the current entropy against the target.

This way $\alpha$ automatically adjusts to keep the policy entropy near the target, with no manual tuning needed.
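A scalar sketch of the temperature step (illustrative; real SAC implementations take this step with an optimizer on $\log \alpha$, as in the code framework above):

```python
import math

def alpha_step(log_alpha, log_probs, target_entropy, lr=3e-4):
    """One gradient step on J(alpha) = E[-alpha * (log pi(a|s) + H_bar)],
    taken with respect to log_alpha so alpha = exp(log_alpha) stays positive."""
    alpha = math.exp(log_alpha)
    mean_term = sum(lp + target_entropy for lp in log_probs) / len(log_probs)
    grad = -alpha * mean_term  # dJ/d(log_alpha)
    return log_alpha - lr * grad

# Policy too deterministic (entropy -3 below target -2): alpha is pushed up
more = alpha_step(0.0, [3.0], target_entropy=-2.0)
# Policy random enough (entropy -1 above target -2): alpha is pushed down
less = alpha_step(0.0, [1.0], target_entropy=-2.0)
```

The sign of `mean_term` is exactly the entropy deficit, so the step moves $\alpha$ in the correct direction automatically.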

Q6: Why is PPO more popular than TRPO?

A: TRPO is theoretically more rigorous (monotonic improvement guarantee), but PPO is more popular in practice, for several reasons:

1. Simple Implementation - TRPO: needs computing Hessian matrix inverse or conjugate gradient method, involves second-order optimization, complex code (about 1000 lines) - PPO: just modify loss function adding clip, first-order optimization (Adam), simple code (about 200 lines)

2. Computational Efficiency - TRPO: conjugate gradient method requires multiple Hessian-vector multiplications per iteration, computationally expensive - PPO: standard gradient descent, 2-3x faster

3. Comparable Performance Experiments show PPO's performance close to or surpassing TRPO on most tasks. PPO's clip mechanism though heuristic, very effective in practice.

4. Hyperparameter Robustness - TRPO: sensitive to the choice of the KL constraint $\delta$, requiring careful tuning - PPO: the clip range $\epsilon = 0.2$ works on most tasks, giving good robustness

5. Easy Extension PPO's clip mechanism easily combines with other techniques (like GAE, reward shaping, curriculum learning), while TRPO's constrained optimization framework less flexible.

OpenAI used PPO for training Dota 2 AI and ChatGPT's RLHF phase, validating its large-scale application feasibility.

Q7: On-policy vs off-policy, what are pros and cons?

A:

On-policy (REINFORCE, A2C, PPO, TRPO):

Pros: - Good stability: training data matches current policy, consistent distribution, more accurate gradient estimates - Simple theory: directly optimizes expected return, no importance sampling correction needed - Easy implementation: doesn't need experience replay and other complex mechanisms

Cons: - Low sample efficiency: each experience is used only once (generated by the current policy), then discarded - Hard to parallelize data reuse: multiple environments can be sampled (like A3C), but the data must still come from the current policy - Insufficient exploration: relies on the policy's own randomness, may converge prematurely

Off-policy (DQN, DDPG, TD3, SAC):

Pros: - High sample efficiency: experience replay allows reusing data, each experience used dozens of times - Flexible exploration: any exploration policy can collect data (like $\epsilon$-greedy, OU noise) - Supports offline data: can learn from demonstrations or historical data (imitation learning)

Cons: - Training instability: data distribution doesn't match target policy, needs importance sampling correction or target network stabilization - Complex theory: involves off-policy correction, distributional shift issues - May diverge: function approximation + off-policy + bootstrapping (deadly triad) easily fails to converge

Selection Advice: - Expensive samples (like real robots): off-policy (SAC, TD3) - Abundant samples (like simulators, Atari): on-policy (PPO) - Need stable training: on-policy - Need learning from demonstrations: off-policy

Q8: How do Actor and Critic in Actor-Critic mutually promote?

A: Actor-Critic is iterative improvement process:

Critic → Actor (critic guides actor):

Critic learns a value function $V(s)$ or $Q(s,a)$, evaluating "how good is this state" or "how good is this action". Actor optimizes the policy based on this evaluation: - If the Critic says $A(s,a) > 0$, the Actor increases $\pi(a|s)$; if $A(s,a) < 0$, it decreases it - The policy gradient formula $\nabla_\theta J(\theta) = \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a|s)\, A(s,a)\big]$ precisely embodies this

This way Actor isn't blindly trial-and-error, but improves directionally (toward Q-value increase).

Actor → Critic (actor helps critic):

Critic needs data to learn value function. Actor generates new trajectories, providing training data: - On-policy: Critic uses current Actor-generated data, TD updates - Off-policy: Critic uses data from experience replay, covering broader state space

More importantly, Actor's improvement enables policy to access higher-value states, Critic can learn more accurate value function (more positive samples).

Positive Feedback Loop:

Good Critic → Actor improves faster → access better states → Critic learns more accurately → Actor further improves

This loop eventually converges to optimal policy and optimal value function (theoretically; in practice may converge to local optimum).

Key is balancing their learning rates: - If Actor learns too fast, Critic can't keep up, provides wrong value estimates, Actor learns incorrectly - If Critic learns too fast, Actor updates too slowly, wastes accurate value information

Typically the Actor's learning rate is set below the Critic's (like $10^{-4}$ vs $10^{-3}$), letting the Critic learn first.

Q9: How to choose discount factor?

A: The discount factor $\gamma \in [0, 1]$ controls the emphasis on future rewards: - $\gamma = 0$: only the immediate reward matters (myopic) - $\gamma = 1$: future rewards weigh the same as immediate ones (farsighted)

Theoretical Considerations:

  1. Task Duration:

    • Short-term tasks (like CartPole, tens of steps to the end): $\gamma = 0.9$ to $0.99$
    • Long-term tasks (like some Atari games, thousands of steps): $\gamma = 0.99$ or $0.999$
    • Infinite horizon tasks: $\gamma$ must be $< 1$ to ensure bounded returns

  2. Effective Horizon: the discount factor determines an "effective planning length" of roughly $1/(1-\gamma)$

    • $\gamma = 0.9$: effective length about 10 steps
    • $\gamma = 0.99$: effective length about 100 steps
    • $\gamma = 0.999$: effective length about 1000 steps

    If a task needs 100-step planning but $\gamma = 0.9$, the agent can look at most about 10 steps ahead and can't learn a long-term strategy.

  3. Variance and Bias:

    • Large $\gamma$: high return variance (accumulates more randomness), but low bias (accurately reflects long-term value)
    • Small $\gamma$: low variance but myopic (underestimates long-term value)
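The effective-horizon rule of thumb is easy to check numerically (a small sketch):

```python
def effective_horizon(gamma):
    """Approximate planning horizon 1 / (1 - gamma): beyond this many steps,
    a reward's weight gamma^n has decayed to roughly 1/e and barely matters."""
    return 1.0 / (1.0 - gamma)

for g in (0.9, 0.99, 0.999):
    print(g, round(effective_horizon(g)))
```

This makes the mismatch concrete: a 100-step task paired with $\gamma = 0.9$ leaves most of the relevant future effectively invisible to the agent.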

Practical Advice:

  • Start with $\gamma = 0.99$ (the common default)
  • If training is unstable (high variance), reduce $\gamma$ slightly (like 0.95)
  • If the task clearly needs long-term planning but the agent can't learn it, increase $\gamma$ (like 0.995 or 0.999)
  • Some tasks can use a varying $\gamma$ (curriculum-style): a small $\gamma$ at first quickly learns short-term strategy, then increasing $\gamma$ learns long-term planning

Examples: - CartPole: $\gamma = 0.95$ to $0.99$ (short task; 0.99 is sufficient) - Atari Pong: $\gamma = 0.99$ (hundreds of steps per game; the agent must predict the ball's trajectory) - Go: $\gamma = 1$ (or very close, like 0.9999), because global planning is needed

Q10: How to debug policy gradient algorithms?

A: Policy gradient algorithms are harder to debug than supervised learning, because reward signals are sparse and delayed. Systematic debugging process:

1. Check Environment and Data - Random policy average return: if better than agent, agent has problems - Manual policy return: what can human experts achieve? Where's upper bound? - Reward distribution: check for outliers (like sudden +1000), may cause training instability

2. Simplify Problem to Verify Code - Test on simple environment (like CartPole): should solve within 100-200 episodes - If simple environment doesn't work, code has bug (check gradient computation, advantage estimation, etc.)

3. Monitor Key Metrics - Policy entropy: should gradually decrease (the policy moves from random to deterministic), but shouldn't drop to 0 too fast (premature convergence) - Advantage mean and variance: the mean should approach 0 (baseline effective), and the variance should gradually shrink - Value function error: $(V_w(s_t) - G_t)^2$ should gradually decrease; if it stays large long-term, the Critic hasn't learned well - KL divergence (PPO/TRPO): $D_{KL}(\pi_{\text{old}} \,\|\, \pi_{\text{new}})$ per update should stay in the target range (like 0.01-0.05)

4. Visualize Policy Behavior - Render several episodes, see what agent is doing: random wandering? Or clear strategy? - Check action distribution: are some actions never selected? (possible network initialization issue)

5. Check Hyperparameters - Learning rate too large: the training curve oscillates drastically - Learning rate too small: convergence is extremely slow - Batch size too small: high gradient-estimate variance, unstable training - $\gamma$ too small: myopic, can't learn long-term strategy

6. Common Bugs - Forgot to detach the target: the value function's target $G_t$ or TD target $r + \gamma V(s')$ shouldn't carry gradient - Advantage not normalized: $A_t$'s scale affects learning; it's best to standardize it per batch - Reward not clipped: some environments have vastly different reward scales, so normalize or clip - Insufficient exploration: a deterministic policy without added noise, or noise too small - Gradient explosion: apply gradient clipping
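One of these fixes, advantage normalization, takes only a few lines (a hedged sketch; in a PyTorch training loop you would do the same per batch of advantages, and handle gradient clipping with `torch.nn.utils.clip_grad_norm_`):

```python
def normalize_advantages(advantages, eps=1e-8):
    """Standardize advantages to zero mean / unit std so the gradient scale
    stays consistent across batches regardless of the reward scale."""
    mean = sum(advantages) / len(advantages)
    var = sum((a - mean) ** 2 for a in advantages) / len(advantages)
    return [(a - mean) / (var ** 0.5 + eps) for a in advantages]

adv = normalize_advantages([10.0, 20.0, 30.0])
```

After normalization, the batch has mean 0 and unit standard deviation, so the policy-gradient step size no longer depends on the raw reward magnitude.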

7. Compare Baseline - Run same task with mature libraries like Stable Baselines3, compare performance - If library results much better, own implementation has problems - If library also doesn't work, may be task too hard or hyperparameters need special tuning

8. Gradually Increase Complexity - First use simplest REINFORCE to verify environment and data flow - Then add baseline, check if variance reduces - Then upgrade to A2C, check if Critic is effective - Finally upgrade to PPO/SAC etc., enjoy performance improvement

Debugging RL requires patience and systematic methods; record all metrics with tools like TensorBoard for easy comparison and backtracking.

References

Core papers in Policy Gradient and Actor-Critic:

  1. Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4), 229-256.
    Paper Link
    REINFORCE algorithm, pioneering work in policy gradient methods

  2. Sutton, R. S., McAllester, D. A., Singh, S. P., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. NIPS.
    Paper Link
    Rigorous proof of policy gradient theorem

  3. Silver, D., Lever, G., Heess, N., et al. (2014). Deterministic policy gradient algorithms. ICML.
    Paper Link
    Deterministic policy gradient, theoretical foundation of DDPG

  4. Mnih, V., Badia, A. P., Mirza, M., et al. (2016). Asynchronous methods for deep reinforcement learning. ICML.
    arXiv:1602.01783
    A3C algorithm, first on-policy method to compete with DQN on Atari

  5. Schulman, J., Levine, S., Abbeel, P., et al. (2015). Trust region policy optimization. ICML.
    arXiv:1502.05477
    TRPO, introducing trust region constraint guaranteeing monotonic improvement

  6. Lillicrap, T. P., Hunt, J. J., Pritzel, A., et al. (2016). Continuous control with deep reinforcement learning. ICLR.
    arXiv:1509.02971
    DDPG, extending DQN to continuous action spaces

  7. Schulman, J., Wolski, F., Dhariwal, P., et al. (2017). Proximal policy optimization algorithms.
    arXiv:1707.06347
    PPO, most commonly used policy gradient algorithm in industry

  8. Fujimoto, S., van Hoof, H., & Meger, D. (2018). Addressing function approximation error in actor-critic methods. ICML.
    arXiv:1802.09477
    TD3, solving DDPG's Q-value overestimation problem

  9. Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. ICML.
    arXiv:1801.01290
    SAC, off-policy algorithm under maximum entropy framework

  10. Haarnoja, T., Zhou, A., Hartikainen, K., et al. (2018). Soft actor-critic algorithms and applications.
    arXiv:1812.05905
    SAC applications and automatic temperature tuning

  11. Schulman, J., Moritz, P., Levine, S., et al. (2016). High-dimensional continuous control using generalized advantage estimation. ICLR.
    arXiv:1506.02438
    GAE, important technique for reducing policy gradient variance


From REINFORCE's Monte Carlo policy gradient to Actor-Critic's TD methods, from A3C's asynchronous parallelization to PPO's clipping tricks, from DDPG's deterministic policy to SAC's maximum entropy framework — policy gradient methods have developed a rich technical stack over the past thirty years. These algorithms not only broke through DQN's discrete action limitation, shining in continuous control tasks, but also provided diverse solutions for exploration-exploitation balance, sample efficiency, and training stability. PPO has become industry's first choice with its simplicity and robustness, while SAC and TD3 dominate in robot control where performance is paramount.

However, model-free methods' sample efficiency remains a bottleneck — even the most advanced SAC requires millions of interactions on complex tasks. The next chapter will explore model-based methods: by learning environment models and planning within models, dramatically reducing real environment interactions, leading us into the world of algorithms like Dyna, MuZero, and Dreamer.

  • Post title:Reinforcement Learning (3): Policy Gradient and Actor-Critic Methods
  • Post author:Chen Kai
  • Create time:2024-08-16 10:45:00
  • Post link:https://www.chenk.top/reinforcement-learning-3-policy-gradient-and-actor-critic/
  • Copyright Notice: All articles in this blog are licensed under BY-NC-SA unless otherwise stated.