Demonstration-Guided Continual RL
- The paper demonstrates that externalizing prior knowledge into a self-evolving demonstration repository enables state-of-the-art forward transfer and minimizes forgetting in continual RL tasks.
- It employs a curriculum-based exploration strategy that seamlessly shifts from demonstration guidance to autonomous exploration for rapid adaptation across dynamic tasks.
- Experimental results on navigation and locomotion benchmarks confirm that DGCRL outperforms traditional methods in Average Performance and Forgetting metrics.
Demonstration-Guided Continual Reinforcement Learning (DGCRL) encompasses a class of algorithms designed to address the stability–plasticity dilemma in continual reinforcement learning (CRL). In dynamic, non-stationary environments, RL agents must learn over a sequence of tasks without catastrophic forgetting or retraining from scratch, while adapting rapidly to novel task conditions. DGCRL externalizes prior knowledge not as parameter regularization or replay buffers, but as a self-evolving demonstration repository that directly influences agent exploration policy at the behavioral level. This integration of demonstration-guided exploration and curriculum scheduling offers state-of-the-art forward transfer, stability, and knowledge reuse in dynamic continual RL benchmarks (Yang et al., 21 Dec 2025). Demonstration-guided approaches also extend to the reward inference setting, as in lifelong inverse reinforcement learning (Lifelong IRL) (Mendez et al., 2022), further highlighting the generality of expert-trajectory-driven transfer in CRL.
1. Formal Problem Setting and Mathematical Framework
DGCRL is formulated for a sequence of Markov decision processes (MDPs) $\mathcal{M}_1, \ldots, \mathcal{M}_N$ that share state space $\mathcal{S}$ and action space $\mathcal{A}$ but may differ in transition dynamics $P_k$ and reward $R_k$. Every task $\mathcal{T}_k$ corresponds to an MDP $\mathcal{M}_k = (\mathcal{S}, \mathcal{A}, P_k, R_k, \gamma)$ with discount factor $\gamma \in [0, 1)$. The agent's objective is to optimize the average return

$$J = \frac{1}{N} \sum_{k=1}^{N} \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{H-1} \gamma^t R_k(s_t, a_t) \right],$$

where $H$ is the episode horizon.
Key desiderata include:
- Stability: Maintain performance on past tasks (minimize forgetting).
- Plasticity: Rapidly acquire new knowledge (maximize forward transfer).
DGCRL diverges from prior CRL approaches by externalizing prior knowledge into a demonstration repository (the guide policy set $\mathcal{G}$), yielding direct behavioral control during agent exploration.
2. Demonstration Repository Construction and Evolution
DGCRL’s core insight is to maintain a continually updated set $\mathcal{G}$ of guide policies, each representing a previously successful or expert trajectory. For a new task $\mathcal{T}_k$, the agent retrieves the demonstration yielding the highest expected return on $\mathcal{T}_k$, i.e.,

$$\pi_g = \arg\max_{\pi \in \mathcal{G}} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{H-1} \gamma^t R_k(s_t, a_t) \right],$$

and records its performance threshold $R_{\text{thr}}$, the expected return achieved by $\pi_g$ on $\mathcal{T}_k$.
Self-evolution of the repository proceeds as follows: if the current policy $\pi_\theta$ (a mixture of demonstration-guided and exploratory behaviors) achieves a return exceeding $R_{\text{thr}}$ during training, then $\pi_\theta$ is added to $\mathcal{G}$ and the threshold is raised accordingly. This mechanism ensures $\mathcal{G}$ encodes increasingly performant and relevant behaviors as the agent’s experience grows (Yang et al., 21 Dec 2025).
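The retrieve-and-evolve cycle described above can be sketched as follows. This is a minimal illustration, not the reference implementation; all class and function names are hypothetical:

```python
class DemoRepository:
    """Self-evolving set of guide policies (illustrative sketch)."""

    def __init__(self):
        self.guides = []  # stored guide policies

    def retrieve(self, eval_return):
        """Pick the guide with the highest expected return on the new task.

        `eval_return(policy)` is assumed to roll the policy out on the
        current task and return its mean episodic return.
        """
        best = max(self.guides, key=eval_return)
        return best, eval_return(best)  # guide policy and threshold R_thr

    def evolve(self, policy, episode_return, threshold):
        """Add the current policy if it beats the threshold; return the
        (possibly raised) threshold."""
        if episode_return > threshold:
            self.guides.append(policy)
            return episode_return
        return threshold
```

In this sketch the threshold simply tracks the best return seen so far, which matches the qualitative description of self-evolution above.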
3. Curriculum-Guided Exploration and Policy Scheduling
DGCRL implements a curriculum-based exploration mechanism that combines demonstration and agent policies within each episode of horizon $H$. A guide length $H_g$ (initialized at a task-dependent value and decremented by $\Delta H$ after threshold-exceeding rollouts) segments each episode:
- For $t < H_g$, actions are sampled from the guide policy: $a_t \sim \pi_g(\cdot \mid s_t)$.
- For $H_g \le t < H$, actions are sampled from the agent's own exploration policy: $a_t \sim \pi_\theta(\cdot \mid s_t)$.
This phased control “jump-starts” the agent from promising state regions and then transitions to autonomous exploration. The guide length decreases as agent performance surpasses demonstration quality, scheduling a gradual shift from demonstration guidance to pure exploration. The update rule is

$$H_g \leftarrow \max(0,\; H_g - \Delta H).$$
Formally, the induced episode return is

$$R(\tau) = \sum_{t=0}^{H_g - 1} \gamma^t R_k(s_t, a_t) + \sum_{t=H_g}^{H-1} \gamma^t R_k(s_t, a_t), \qquad a_t \sim \begin{cases} \pi_g(\cdot \mid s_t), & t < H_g, \\ \pi_\theta(\cdot \mid s_t), & t \ge H_g. \end{cases}$$
No explicit imitation loss is required, since demonstration policies directly influence the visitation distribution (Yang et al., 21 Dec 2025).
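The mixed rollout and guide-length schedule can be sketched in a few lines. This is a schematic illustration under an assumed minimal environment interface (`reset`/`step`), not the paper's code:

```python
def rollout_mixed(env, guide_policy, agent_policy, horizon, guide_len):
    """One episode: guide policy for the first `guide_len` steps,
    the agent's own exploration policy afterwards."""
    s = env.reset()
    transitions, ep_return = [], 0.0
    for t in range(horizon):
        a = guide_policy(s) if t < guide_len else agent_policy(s)
        s_next, r, done = env.step(a)
        transitions.append((s, a, r, s_next))
        ep_return += r
        s = s_next
        if done:
            break
    return transitions, ep_return

def update_guide_len(guide_len, ep_return, threshold, delta):
    """Curriculum schedule: shrink the guided prefix once the mixed
    policy's return exceeds the demonstration threshold."""
    if ep_return > threshold:
        guide_len = max(0, guide_len - delta)
    return guide_len
```

Because the guide policy only determines which actions are executed, it shapes the state-visitation distribution without any imitation loss, consistent with the mechanism described above.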
4. DGCRL Algorithmic Implementation
DGCRL is realized atop off-policy actor–critic RL (TD3 in the reference implementation). The training protocol for each task $\mathcal{T}_k$:
- Select the highest-return demonstration $\pi_g$ from $\mathcal{G}$, set the threshold $R_{\text{thr}}$, and initialize the guide length $H_g$.
- For each training episode:
  - Execute the mixed policy for $H$ steps, gathering transitions $(s_t, a_t, r_t, s_{t+1})$ into the replay buffer.
  - Update the actor (parameters $\theta$) and twin critics (parameters $\phi_1, \phi_2$) by minimizing, respectively,
  $$\mathcal{L}(\theta) = -\,\mathbb{E}\!\left[ Q_{\phi_1}(s, \pi_\theta(s)) \right], \qquad \mathcal{L}(\phi_i) = \mathbb{E}\!\left[ \big( Q_{\phi_i}(s, a) - y \big)^2 \right],$$
  with TD target $y = r + \gamma \min_{j \in \{1,2\}} Q_{\phi_j'}\big(s', \pi_{\theta'}(s') + \epsilon\big)$ and clipped target-policy noise $\epsilon$.
  - If the episode return exceeds $R_{\text{thr}}$: add $\pi_\theta$ to $\mathcal{G}$ and decrement $H_g$ by $\Delta H$.
- Proceed to the next task.
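The distinctive critic step, the clipped double-Q target of standard TD3, can be written compactly. A numpy sketch, with illustrative hyperparameter values and assumed callables for the target networks:

```python
import numpy as np

def td3_target(r, s_next, q1_targ, q2_targ, actor_targ,
               gamma=0.95, noise_std=0.2, noise_clip=0.5):
    """Clipped double-Q TD target (standard TD3): smooth the target
    action with clipped Gaussian noise, then bootstrap from the
    minimum of the two target critics."""
    a_next = np.asarray(actor_targ(s_next), dtype=float)
    noise = np.clip(np.random.normal(0.0, noise_std, size=a_next.shape),
                    -noise_clip, noise_clip)
    a_next = a_next + noise
    q_min = np.minimum(q1_targ(s_next, a_next), q2_targ(s_next, a_next))
    return r + gamma * q_min
```

Both critics regress onto this single target, which is what suppresses the overestimation bias that a single critic would accumulate.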
Critical hyperparameters for TD3 include the actor and critic learning rates, a discount factor of $0.95$, the target-network update rate, and the action-noise scale. The initial guide length (60 for HalfCheetah) and the performance threshold $R_{\text{thr}}$ are task-dependent (Yang et al., 21 Dec 2025).
5. Experimental Results and Benchmarking
Empirical evaluation demonstrates DGCRL’s efficacy on both synthetic nonstationary 2D navigation (three variation modes: goal/reward, puddle/transition, both) and MuJoCo locomotion tasks (Hopper, HalfCheetah, Ant with target-velocity shifts). Each benchmark comprises a sequence of tasks with a fixed episode horizon. Baselines comprise naive sequential RL, Robust Policy (domain randomization), Adaptive (LSTM), MAML, and LLIRL.
Quantitative comparisons, using metrics such as Average Performance (AP), Forward Transfer (FT), and Forgetting (F), show that DGCRL achieves superior performance and lower (sometimes negative) forgetting:
| Benchmark | AP (DGCRL) | AP (Baseline Range) | FT (DGCRL) | FT (Baseline Range) | Forgetting (DGCRL) | Forgetting (Baseline Range) |
|---|---|---|---|---|---|---|
| Navigation v1 | –6.7 | –43…–78 | 0.82 | –0.02…0.62 | –1.3 | 22…31 |
| Hopper | +93.8 | –3…–25 | 0.80 | –0.14…0.39 | –3.5 | –60…35 |
Learning curves exhibit a rapid jump-start (due to demonstration coverage), stable convergence, brief dips correlating with reductions in guide length , and swift recovery, confirming the value of curriculum scheduling (Yang et al., 21 Dec 2025).
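The AP and Forgetting metrics in the table can be computed from a task-by-task performance matrix. The conventions below are the common continual-learning definitions and are an assumption here; the paper's exact formulas may differ in detail:

```python
import numpy as np

def continual_metrics(P):
    """P[i, j]: performance on task j evaluated after training task i
    (lower triangle filled, i >= j).

    Returns (AP, Forgetting) under common continual-learning
    conventions: AP averages final performance over tasks; Forgetting
    averages (best past performance - final performance), so negative
    values indicate backward transfer.
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    final = P[-1, :]
    ap = final.mean()
    forgetting = np.mean([P[j:-1, j].max() - final[j] for j in range(n - 1)])
    return ap, forgetting
```

Under these definitions, the negative forgetting values reported for DGCRL mean that performance on earlier tasks actually improves after later tasks are trained.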
6. Sensitivity, Ablations, and Theoretical Insights
Sensitivity analyses demonstrate that increasing initial repository size accelerates convergence and improves AP/FT, but marginally affects forgetting. Notably, DGCRL retains a performance lead even with minimal demonstrations (20% of full set).
Ablations confirm that (i) resetting both actor and critic parameters between tasks maximizes AP/FT, and (ii) so-called “pure replay” baselines (Initial Trajectory Replay, Evolving Trajectory Replay) are inferior to DGCRL, establishing that dynamic curriculum and self-evolution are crucial beyond mere replay of demonstrations.
A theoretical regret analysis (Appendix) indicates a sublinear dependency on the number of tasks $N$, though direct comparisons to alternate CRL regret bounds are pending (Yang et al., 21 Dec 2025).
7. Relation to Lifelong Inverse Reinforcement Learning and Broader Context
Lifelong IRL (Mendez et al., 2022) carries the demonstration-guided paradigm into the reward-inference regime. Instead of policy cloning, it employs maximum-entropy IRL with a hierarchical latent reward basis and sparse task-specific coefficients, incrementally recovering reusable components across a sequence of demonstration-driven tasks. The online learning algorithm alternates between single-task reward inference and basis updating (via LASSO and ridge regression), supporting both forward and reverse transfer, i.e., improvement on earlier tasks as more tasks are processed. This approach provides an efficient, interpretable instantiation of demonstration-guided transfer, embodying the conceptual foundation of DGCRL in the inverse-RL domain.
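The alternating update has a compact numerical core: sparse-code each task's reward parameters against a shared basis (the LASSO step, here via ISTA/proximal gradient), then refit the basis in closed form (the ridge step). A schematic numpy sketch; symbols and hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def infer_coeffs(L, theta, lam=0.1, lr=0.01, iters=500):
    """LASSO step: sparse coefficients s minimizing
    ||L s - theta||^2 + lam * ||s||_1, via ISTA."""
    s = np.zeros(L.shape[1])
    for _ in range(iters):
        grad = L.T @ (L @ s - theta)
        s = soft_threshold(s - lr * grad, lr * lam)
    return s

def update_basis(Thetas, S, mu=0.1):
    """Ridge step: closed-form refit of the shared basis,
    L = Thetas S^T (S S^T + mu I)^{-1}, where column t of Thetas holds
    task t's reward parameters and column t of S its coefficients."""
    k = S.shape[0]
    return Thetas @ S.T @ np.linalg.inv(S @ S.T + mu * np.eye(k))
```

Each new task only touches its own coefficient vector plus the shared basis, which is what makes the update online and allows reverse transfer to earlier tasks through the refitted basis.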
8. Limitations and Prospects
Current DGCRL variants are evaluated exclusively in simulated domains. Scalability of the demonstration repository may require advanced retrieval and pruning (e.g., clustering-based indexing) for real-world application. Conventional forgetting metrics can yield misleading negative values; the development of more robust continual RL evaluation protocols is needed. DGCRL directly shapes sampled state-action distributions but does not address scenarios with evolving observation modalities or online feature learning. The open challenge remains to extend theoretical analysis, repository management, and demonstration-guided control to broader, high-dimensional, or safety-critical settings (Yang et al., 21 Dec 2025).
References
- "Demonstration-Guided Continual Reinforcement Learning in Dynamic Environments" (Yang et al., 21 Dec 2025)
- "Lifelong Inverse Reinforcement Learning" (Mendez et al., 2022)