CAPGrasp: An $\mathbb{R}^3\times \mathrm{SO}(2)$-Equivariant Continuous Approach-Constrained Generative Grasp Sampler

Published 18 Oct 2023 in cs.RO (arXiv:2310.12113v2)

Abstract: We propose CAPGrasp, an $\mathbb{R}^3\times \mathrm{SO}(2)$-equivariant 6-DoF continuous approach-constrained generative grasp sampler. It includes a novel learning strategy that eliminates the need to curate massive conditionally labeled datasets, and a constrained grasp refinement technique that improves grasp poses while respecting the grasp approach directional constraints. The experimental results demonstrate that CAPGrasp is more than three times as sample efficient as unconstrained grasp samplers while achieving up to a 38% improvement in grasp success rate. CAPGrasp also achieves 4-10% higher grasp success rates than constrained but noncontinuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces.
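To make the constraint concrete: respecting an "approach directional constraint" during refinement can be viewed as keeping the grasp's approach axis inside a cone around a desired direction. The sketch below is an illustrative projection step under that assumption; the function name, cone parameterization, and tolerances are not taken from the paper, which does not specify its refinement procedure at this level of detail.

```python
import numpy as np

def project_into_cone(approach, constraint, max_angle_rad):
    """Project a unit approach direction into a cone of half-angle
    max_angle_rad around a unit constraint direction.

    If the approach already lies inside the cone it is returned
    unchanged; otherwise it is rotated, within the plane spanned by
    the two vectors, onto the closest point of the cone boundary.
    """
    approach = approach / np.linalg.norm(approach)
    constraint = constraint / np.linalg.norm(constraint)
    cos_angle = np.clip(approach @ constraint, -1.0, 1.0)
    if np.arccos(cos_angle) <= max_angle_rad:
        return approach  # already satisfies the constraint
    # Component of `approach` orthogonal to `constraint`.
    ortho = approach - cos_angle * constraint
    norm = np.linalg.norm(ortho)
    if norm < 1e-9:
        # Anti-parallel case: any direction on the cone boundary is
        # equally close, so pick an arbitrary orthogonal axis.
        ortho = np.zeros(3)
        ortho[np.argmin(np.abs(constraint))] = 1.0
        ortho -= (ortho @ constraint) * constraint
        norm = np.linalg.norm(ortho)
    ortho /= norm
    # Closest direction on the cone boundary to `approach`.
    return np.cos(max_angle_rad) * constraint + np.sin(max_angle_rad) * ortho
```

A refinement loop would apply such a projection after each gradient step on the grasp pose, so that pose quality improves while the approach direction never leaves the feasible cone.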

