Online Prototype Alignment for Few-shot Policy Transfer
Abstract: Domain adaptation in reinforcement learning (RL) mainly deals with changes in observations when a policy is transferred to a new environment. Many traditional domain-adaptation approaches in RL learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require access to abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only several episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (instead of visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.
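The core alignment idea — matching unseen target-domain elements to source-domain prototypes by what they *do* rather than how they *look* — can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: it assumes each element's behavior has already been summarized as a fixed-length "functional feature" vector (e.g., statistics gathered while the exploration policy interacts with it), and uses a simple greedy one-to-one matcher on cosine similarity; the function name and toy vectors are hypothetical.

```python
import numpy as np

def align_prototypes(source_protos, target_feats):
    """Map each target element to the source prototype whose functional
    feature vector is most similar (cosine similarity), using a greedy
    one-to-one assignment in order of decreasing similarity."""
    def unit(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # sim[t, s] = cosine similarity between target element t and prototype s
    sim = unit(np.asarray(target_feats, float)) @ unit(np.asarray(source_protos, float)).T

    mapping, used = {}, set()
    for t, s in sorted(np.ndindex(sim.shape), key=lambda ij: -sim[ij]):
        if t not in mapping and s not in used:
            mapping[t] = s
            used.add(s)
    return mapping

# Toy example: three source prototypes and three visually different
# target elements whose functional statistics resemble the prototypes.
source = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
target = np.array([[0.9, 0.1], [0.1, 1.1], [1.0, 0.9]])
print(align_prototypes(source, target))  # each target matched to its functional twin
```

A Hungarian (optimal) assignment could replace the greedy matcher when similarities are noisy; the point of the sketch is only that alignment here depends on functional features, which stay meaningful even when appearances change completely.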