
Learning Which Side to Scan: Multi-View Informed Active Perception with Side Scan Sonar for Autonomous Underwater Vehicles

Published 2 Feb 2024 in cs.RO (arXiv:2402.01106v2)

Abstract: Autonomous underwater vehicles often perform surveys that capture multiple views of targets to provide more information for human operators or automatic target recognition algorithms. In this work, we address the problem of choosing the most informative views that minimize survey time while maximizing classifier accuracy. We introduce a novel active perception framework for multi-view adaptive surveying and reacquisition using side scan sonar imagery. Our framework addresses this challenge with a graph formulation of the adaptive survey task: Graph Neural Networks (GNNs) both classify acquired sonar views and choose the next best view based on the data collected so far. We evaluate our method on simulated surveys in a high-fidelity side scan sonar simulator, where our approach surpasses the state of the art in both classification accuracy and survey efficiency. This framework is a promising step toward more efficient autonomous missions involving side scan sonar, such as underwater exploration, marine archaeology, and environmental monitoring.
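The two roles the abstract assigns to the GNN (classifying the views acquired so far and scoring candidate next views) can be illustrated with a minimal sketch. This is not the paper's implementation: the single mean-aggregation message-passing layer, the fully connected view graph, and all parameter names (`W_self`, `W_neigh`, `W_cls`, `w_view`) are illustrative assumptions, with random weights standing in for trained ones.

```python
import numpy as np

def gnn_layer(node_feats, adj, W_self, W_neigh):
    # One round of mean-aggregation message passing (GCN-style sketch).
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh = (adj @ node_feats) / deg
    return np.maximum(node_feats @ W_self + neigh @ W_neigh, 0.0)  # ReLU

def classify_and_rank(node_feats, adj, params):
    """Return target class logits and a ranking of candidate next views."""
    h = gnn_layer(node_feats, adj, params["W_self"], params["W_neigh"])
    target_logits = h.mean(axis=0) @ params["W_cls"]  # pooled classification head
    view_scores = h @ params["w_view"]                # per-view "informativeness" scores
    return target_logits, np.argsort(-view_scores)    # best candidate view first

rng = np.random.default_rng(0)
n_views, d, d_hid, n_classes = 4, 8, 16, 3
params = {
    "W_self":  rng.normal(size=(d, d_hid)),
    "W_neigh": rng.normal(size=(d, d_hid)),
    "W_cls":   rng.normal(size=(d_hid, n_classes)),
    "w_view":  rng.normal(size=d_hid),
}
feats = rng.normal(size=(n_views, d))                 # embeddings of acquired sonar views
adj = np.ones((n_views, n_views)) - np.eye(n_views)   # fully connected view graph
logits, ranking = classify_and_rank(feats, adj, params)
print(logits.shape, ranking[0])
```

In an adaptive survey loop, the top-ranked view would be acquired next, appended to the graph as a new node, and the classification re-run on the enlarged graph.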

