PRIME: Scaffolding Manipulation Tasks with Behavior Primitives for Data-Efficient Imitation Learning
Abstract: Imitation learning has shown great potential for enabling robots to acquire complex manipulation behaviors. However, these algorithms suffer from high sample complexity in long-horizon tasks, where compounding errors accumulate over the task horizon. We present PRIME (PRimitive-based IMitation with data Efficiency), a behavior-primitive-based framework designed to improve the data efficiency of imitation learning. PRIME scaffolds robot tasks by decomposing task demonstrations into primitive sequences, then learning a high-level control policy that sequences primitives through imitation learning. Our experiments demonstrate that PRIME achieves significant performance improvements in multi-stage manipulation tasks, with success rates 10-34% higher than state-of-the-art baselines in simulation and 20-48% higher on physical hardware.
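The two-stage pipeline the abstract describes can be sketched in miniature: first, demonstrations are parsed into labeled primitive segments; second, a high-level policy is trained by imitation to map observations to a primitive choice and its parameters. This is a minimal illustrative sketch, not the paper's implementation; all names (`Segment`, `parse_demo`, `fit_high_level_policy`) and the lookup-table "policy" are hypothetical stand-ins for the learned components.

```python
# Hypothetical sketch of a primitive-based imitation pipeline:
# (1) parse each demonstration into a sequence of behavior primitives,
# (2) behavior-clone a high-level policy mapping an observation to
#     (primitive type, primitive parameters).
from dataclasses import dataclass

PRIMITIVES = ("reach", "grasp", "place")  # illustrative primitive library

@dataclass(frozen=True)
class Segment:
    obs: tuple        # observation at the start of the segment
    primitive: str    # behavior primitive that explains this segment
    params: tuple     # primitive arguments (e.g. a target pose)

def parse_demo(demo):
    """Stand-in for trajectory parsing: label each step of a raw
    demonstration with the primitive that best reproduces it."""
    return [Segment(obs=o, primitive=p, params=a) for (o, p, a) in demo]

def fit_high_level_policy(segments):
    """Trivial behavior cloning: memorize the primitive chosen for each
    observation. A real high-level policy would generalize across
    observations with a learned function approximator."""
    table = {s.obs: (s.primitive, s.params) for s in segments}
    def policy(obs):
        return table[obs]
    return policy

# Toy demonstration: (observation, primitive label, primitive params) steps.
demo = [((0,), "reach", (0.3,)), ((1,), "grasp", ()), ((2,), "place", (0.7,))]
policy = fit_high_level_policy(parse_demo(demo))
print(policy((1,)))  # → ('grasp', ())
```

At execution time, the selected primitive (rather than the high-level policy) produces the low-level motor commands, which is what shortens the effective horizon the imitation learner must cover.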