
LiSTA: Geometric Object-Based Change Detection in Cluttered Environments

Published 4 Mar 2024 in cs.RO (arXiv:2403.02175v2)

Abstract: We present LiSTA (LiDAR Spatio-Temporal Analysis), a system that detects probabilistic object-level change over time using multi-mission SLAM. Many applications require such a system, including construction, robotic navigation, long-term autonomy, and environmental monitoring. We focus on the semi-static scenario, where objects are added, removed, or moved over weeks or months. Our system combines multi-mission LiDAR SLAM, volumetric differencing, object instance description, and correspondence grouping using learned descriptors to track an open set of objects. Object correspondences between missions are determined by clustering the objects' learned descriptors. We demonstrate our approach on datasets collected in a simulated environment and on a real-world dataset captured with a LiDAR system mounted on a quadruped robot monitoring an industrial facility containing static, semi-static, and dynamic objects. Our method achieves superior performance in detecting changes in semi-static environments compared to existing methods.
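The abstract's correspondence step (grouping objects across missions by their learned descriptors) can be illustrated with a minimal sketch. This is not the paper's implementation: the greedy cosine-distance matching, the `threshold` parameter, and all object names below are illustrative assumptions standing in for the paper's learned-descriptor clustering.

```python
import numpy as np

def match_objects(desc_a, desc_b, threshold=0.2):
    """Greedy cross-mission correspondence grouping (illustrative only).

    desc_a, desc_b: dicts mapping object id -> descriptor vector for
    mission A and mission B. Objects whose descriptors lie within
    `threshold` cosine distance are grouped as the same instance;
    unmatched objects are flagged as removed (mission A only) or
    added (mission B only).
    """
    def cos_dist(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    matches, removed, added = [], [], list(desc_b)
    for ida, da in desc_a.items():
        best, best_d = None, threshold
        for idb in added:
            d = cos_dist(da, desc_b[idb])
            if d < best_d:
                best, best_d = idb, d
        if best is None:
            removed.append(ida)       # seen in mission A, absent in B
        else:
            matches.append((ida, best))
            added.remove(best)        # claimed; leftovers are new objects
    return matches, removed, added
```

In this toy setting, descriptors playing the role of the learned embeddings are simple 2-D vectors; the real system would produce high-dimensional descriptors per segmented object instance.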
