
Exosense: A Vision-Based Scene Understanding System For Exoskeletons

Published 21 Mar 2024 in cs.RO and cs.CV | arXiv:2403.14320v3

Abstract: Self-balancing exoskeletons are a key enabling technology for individuals with mobility impairments. While current challenges center on human-compliant hardware and control, unlocking their use for daily activities also requires a scene perception system. In this work, we present Exosense, a vision-centric scene understanding system for self-balancing exoskeletons. We introduce a multi-sensor visual-inertial mapping device as well as a navigation stack for state estimation, terrain mapping, and long-term operation. We tested Exosense attached to both a human leg and Wandercraft's Personal Exoskeleton in real-world indoor scenarios. This enabled us to test the system during typical periodic walking gaits, as well as to explore future use in multi-story environments. We demonstrate that Exosense achieves an odometry drift of about 4 cm per meter traveled and constructs terrain maps with under 1 cm average reconstruction error. It can also operate in a visual localization mode within a previously mapped environment, providing a step towards the long-term operation of exoskeletons.
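The abstract reports two quantitative results: odometry drift of about 4 cm per meter traveled, and terrain maps with under 1 cm average reconstruction error. The paper's exact metric definitions are not given here, so the sketch below shows one plausible way such numbers could be computed from an estimated trajectory and a ground-truth reference. The function names, the drift definition (final-position error divided by distance traveled), and the toy data are illustrative assumptions, not the authors' evaluation code.

```python
# Illustrative sketch (not from the paper): plausible computations for
# "drift per meter traveled" and "average terrain reconstruction error".
import numpy as np

def drift_per_meter(est_positions: np.ndarray, gt_positions: np.ndarray) -> float:
    """Relative odometry drift: final position error divided by the
    ground-truth distance traveled (meters of error per meter traveled)."""
    # Total distance traveled along the ground-truth trajectory.
    segment_lengths = np.linalg.norm(np.diff(gt_positions, axis=0), axis=1)
    distance = segment_lengths.sum()
    # Error between the final estimated and ground-truth positions.
    final_error = np.linalg.norm(est_positions[-1] - gt_positions[-1])
    return final_error / distance

def mean_reconstruction_error(map_heights: np.ndarray,
                              gt_heights: np.ndarray) -> float:
    """Average absolute height error of a 2.5D terrain map against a
    ground-truth elevation grid of the same shape (e.g. a survey scan)."""
    valid = ~np.isnan(map_heights) & ~np.isnan(gt_heights)
    return np.abs(map_heights[valid] - gt_heights[valid]).mean()

# Toy example: a 10 m straight walk whose estimate accumulates lateral
# error at 4 cm per meter, matching the figure quoted in the abstract.
t = np.linspace(0.0, 10.0, 101)
gt = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
est = gt + np.stack([np.zeros_like(t), 0.04 * t, np.zeros_like(t)], axis=1)
print(f"drift: {100 * drift_per_meter(est, gt):.1f} cm per meter traveled")
```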

