Maximizing Representation-Based Transfer in RL Fine-Tuning
Develop methods that maximize the transfer derived specifically from reusing pretrained feature representations when fine-tuning reinforcement learning agents, including scenarios where the policy head is re-initialized, so that representation reuse alone yields substantial learning speedups and performance gains.
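A minimal sketch of the setting this problem describes, assuming a PyTorch agent: the encoder carries pretrained weights while the policy head is freshly re-initialized, so any transfer must come from the representation. The `Encoder`, `PolicyHead`, and learning rates here are illustrative placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained feature encoder whose representation we reuse.
class Encoder(nn.Module):
    def __init__(self, obs_dim=16, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, obs):
        return self.net(obs)

# Policy head built fresh for the downstream task.
class PolicyHead(nn.Module):
    def __init__(self, feat_dim=32, n_actions=4):
        super().__init__()
        self.out = nn.Linear(feat_dim, n_actions)

    def forward(self, feats):
        return self.out(feats)  # action logits

def build_finetune_agent(pretrained_encoder):
    """Reuse the pretrained representation; re-initialize the policy head."""
    head = PolicyHead()  # fresh head: transfer must come from the encoder
    # Illustrative choice: a smaller learning rate on the encoder to adapt
    # the pretrained representation slowly, a larger one on the new head.
    optim = torch.optim.Adam([
        {"params": pretrained_encoder.parameters(), "lr": 1e-5},
        {"params": head.parameters(), "lr": 3e-4},
    ])
    return pretrained_encoder, head, optim

encoder = Encoder()  # stands in for loading a pretrained checkpoint
encoder, head, optim = build_finetune_agent(encoder)
logits = head(encoder(torch.randn(8, 16)))  # batch of 8 observations
```

The separate learning-rate groups are one simple way to protect the reused representation during fine-tuning; the open question is what training scheme extracts the most transfer from the encoder alone.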
References
Maximizing transfer from the representation remains an interesting open question.
— Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem
(Wołczyk et al., 2024, arXiv:2402.02868), Appendix, Section "Analysis of forgetting in robotic manipulation tasks", subsection "Impact of representation vs. policy on transfer"