MMP++: Motion Manifold Primitives with Parametric Curve Models

Published 26 Oct 2023 in cs.AI, cs.LG, and cs.RO | (2310.17072v4)

Abstract: Motion Manifold Primitives (MMP), a manifold-based approach for encoding basic motion skills, can produce diverse trajectories, enabling the system to adapt to unseen constraints. Nonetheless, we argue that current MMP models lack crucial functionalities of movement primitives, such as temporal and via-point modulation, found in traditional approaches. This shortfall primarily stems from MMP's reliance on discrete-time trajectories. To overcome these limitations, we introduce Motion Manifold Primitives++ (MMP++), a new model that integrates the strengths of both MMP and traditional methods by incorporating parametric curve representations into the MMP framework. Furthermore, we identify a significant challenge with MMP++: performance degradation due to geometric distortions in the latent space, meaning that similar motions are not mapped to nearby latent coordinates. To address this, Isometric Motion Manifold Primitives++ (IMMP++) is proposed to ensure that the latent space accurately preserves the manifold's geometry. Our experimental results across various applications, including 2-DoF planar motions, 7-DoF robot arm motions, and SE(3) trajectory planning, show that MMP++ and IMMP++ outperform existing methods in trajectory generation tasks, achieving substantial improvements in some cases. Moreover, they enable the modulation of latent coordinates and via-points, thereby allowing efficient online adaptation to dynamic environments.
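To illustrate why parametric curves enable temporal and via-point modulation where discrete-time trajectories do not, the following is a minimal sketch assuming a Bezier parameterization. The `toy_decoder` below is a hypothetical random linear stand-in for the learned decoder network, not the paper's actual model; the point is only that once a latent code decodes to curve parameters (here, control points), the trajectory can be evaluated at any time resolution, and individual control points can be adjusted for via-point-style modulation.

```python
import numpy as np
from math import comb

def bezier_curve(control_points, t):
    """Evaluate a Bezier curve at arbitrary times t in [0, 1].

    Because the motion is a continuous parametric curve rather than a
    fixed discrete-time sequence, any sampling density (temporal
    modulation) is possible after decoding.
    """
    n = len(control_points) - 1
    t = np.asarray(t, dtype=float)
    # Bernstein basis, shape (T, n+1)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)], axis=1)
    return basis @ control_points        # shape (T, dof)

def toy_decoder(z, n_ctrl=5, dof=2):
    """Hypothetical stand-in for a learned decoder: latent code -> control points."""
    rng = np.random.default_rng(0)       # fixed weights for reproducibility
    W = rng.standard_normal((n_ctrl * dof, z.size)) * 0.1
    return (W @ z).reshape(n_ctrl, dof)

z = np.array([0.3, -0.7])                # a latent coordinate
ctrl = toy_decoder(z)
traj_coarse = bezier_curve(ctrl, np.linspace(0.0, 1.0, 10))
traj_fine = bezier_curve(ctrl, np.linspace(0.0, 1.0, 1000))  # same motion, denser timing

# Via-point-style modulation: shift one control point, re-evaluate the curve.
ctrl_modulated = ctrl.copy()
ctrl_modulated[2] += np.array([0.5, 0.0])
traj_modulated = bezier_curve(ctrl_modulated, np.linspace(0.0, 1.0, 100))
```

The geometric-distortion issue the abstract raises is separate: it concerns whether nearby latent codes `z` decode to similar curves, which IMMP++ addresses with an isometry-preserving regularizer on the decoder rather than anything in this sampling sketch.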
