Map-aided annotation for pole base detection

Published 4 Mar 2024 in eess.IV and cs.CV | arXiv:2403.01868v1

Abstract: For autonomous navigation, high-definition (HD) maps are a widely used source of information. Pole-like features encoded in HD maps, such as traffic signs, traffic lights, or street lights, can be used as landmarks for localization. For this purpose, they first need to be detected by the vehicle using its embedded sensors. While geometric models can be used to process 3D point clouds retrieved by lidar sensors, modern image-based approaches rely on deep neural networks and therefore heavily depend on annotated training data. In this paper, a 2D HD map is used to automatically annotate pole-like features in images. In the absence of height information, the map features are represented as pole bases at ground level. We show how an additional lidar sensor can be used to filter out occluded features and refine the ground projection. We also demonstrate how an object detector can be trained to detect pole bases. To evaluate our methodology, it is first validated with data manually annotated from semantic segmentation and then compared to our own automatically generated annotations recorded in the city of Compiègne, France. Erratum: in the original version [1], an error occurred in the accuracy evaluation of the studied models, and the evaluation method applied to the detection results was not clearly defined. This revision corrects that section and presents updated results, in particular the Mean Absolute Errors (MAE).
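
The abstract describes the core geometric step: lifting a 2D map feature to a pole base on the ground plane, projecting it into the image, and using lidar to reject occluded features. Below is a minimal sketch of that idea, assuming a flat ground plane, a calibrated pinhole camera, and a known camera pose; the function names, coordinate frames, and numeric values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): lifting a 2D HD-map
# feature to a pole base on an assumed flat ground plane, projecting it
# into the image with a calibrated pinhole camera, and rejecting features
# that a lidar scan shows to be occluded. All frames, names, and numeric
# values below are illustrative assumptions.
import numpy as np

def project_pole_base(pole_xy_map, T_cam_from_map, K, ground_z=0.0):
    """Project a 2D map feature (east, north) into the image as a pole base.

    pole_xy_map    : (2,) pole position in the map frame, in metres
    T_cam_from_map : (4, 4) rigid transform from map frame to camera frame
    K              : (3, 3) pinhole intrinsic matrix
    ground_z       : assumed ground height (the 2D map stores no height)
    """
    # Lift the 2D feature to a 3D point on the assumed ground plane.
    p_map = np.array([pole_xy_map[0], pole_xy_map[1], ground_z, 1.0])
    p_cam = T_cam_from_map @ p_map
    if p_cam[2] <= 0.0:          # behind the camera: not visible
        return None
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]        # pixel coordinates (u, v)

def is_occluded(expected_range, lidar_range, margin=0.5):
    """Flag a map feature as occluded when the lidar return along the same
    bearing is clearly closer than the expected distance to the feature."""
    return lidar_range is not None and lidar_range < expected_range - margin

# Example: camera 1.5 m above the ground, looking along the map x-axis.
R = np.array([[0.0, -1.0,  0.0],   # camera x (right)   = -map y
              [0.0,  0.0, -1.0],   # camera y (down)    = -map z
              [1.0,  0.0,  0.0]])  # camera z (forward) =  map x
C = np.array([0.0, 0.0, 1.5])      # camera centre in the map frame
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = -R @ C
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
print(project_pole_base(np.array([10.0, 0.0]), T, K))  # -> [640. 480.]
```

Note that the paper additionally uses the lidar to refine the ground projection (rather than assuming a fixed ground height as the `ground_z = 0` default above does), a step this sketch deliberately omits.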

References (19)
  1. B. Missaoui, M. Noizet, and P. Xu, “Map-aided annotation for pole base detection,” in IEEE Intelligent Vehicles Symposium Workshop, June 2023.
  2. L. Li, M. Yang, L. Weng, and C. Wang, “Robust localization for intelligent vehicles based on pole-like features using the point cloud,” IEEE Transactions on Automation Science and Engineering, pp. 1–14, 2021.
  3. M. Sefati, M. Daum, B. Sondermann, K. D. Kreiskother, and A. Kampker, “Improving vehicle localization using semantic and pole-like landmarks,” in IEEE Intelligent Vehicles Symposium, Los Angeles, CA, USA, June 2017, pp. 13–19.
  4. R. Spangenberg, D. Goehring, and R. Rojas, “Pole-based localization for autonomous vehicles in urban scenarios,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, Oct. 2016, pp. 2161–2166.
  5. M. Gouda, A. Shalkamy, X. Li, and K. El-Basyouny, “Fully automated algorithm for light pole detection and mapping in rural highway environment using mobile light detection and ranging point clouds,” Transportation Research Record: Journal of the Transportation Research Board, vol. 2676, no. 7, pp. 617–629, July 2022.
  6. M. Lehtomäki, A. Jaakkola, J. Hyyppä, A. Kukko, and H. Kaartinen, “Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data,” Remote Sensing, vol. 2, no. 3, pp. 641–664, Feb. 2010.
  7. B. Rodríguez-Cuenca, S. García-Cortés, C. Ordóñez, and M. Alonso, “Automatic detection and classification of pole-like objects in urban point cloud data using an anomaly detection algorithm,” Remote Sensing, vol. 7, no. 10, pp. 12680–12703, Sept. 2015.
  8. F. Ghallabi, G. El-Haj-Shhade, M.-A. Mittet, and F. Nashashibi, “Lidar-based road signs detection for vehicle localization in an HD map,” in IEEE Intelligent Vehicles Symposium, Paris, France, June 2019, pp. 1484–1490.
  9. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in Computer Vision – ECCV, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 740–755.
  10. A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.
  11. J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “SemanticKITTI: A dataset for semantic scene understanding of lidar sequences,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, October 2019.
  12. M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp. 3212–3223.
  13. F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, “BDD100K: A diverse driving dataset for heterogeneous multitask learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, pp. 2636–2645.
  14. H. Dong, X. Chen, S. Särkkä, and C. Stachniss, “Online pole segmentation on range images for long-term lidar localization in urban environments,” Robotics and Autonomous Systems, vol. 159, p. 104283, 2023.
  15. C. Sun, J. M. U. Vianney, Y. Li, L. Chen, L. Li, F.-Y. Wang, A. Khajepour, and D. Cao, “Proximity based automatic data annotation for autonomous driving,” IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 2, pp. 395–404, 2020.
  16. W. H. Lee, K. Jung, C. Kang, and H. S. Chang, “Semi-automatic framework for traffic landmark annotation,” IEEE Open Journal of Intelligent Transportation Systems, vol. 2, pp. 1–12, 2021.
  17. S. Lee, H. Lim, and H. Myung, “Patchwork++: Fast and robust ground segmentation solving partial under-segmentation using 3D point cloud,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022, pp. 13276–13283.
  18. C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint arXiv:2207.02696, 2022.
  19. L. Vilalta Estrada, C. Muñoz García, E. Domínguez Tijero, M. Noizet, P. Xu, S. Y. Voon, S. Guerassimov, and W. W. Cox, “ERASMO – Enhanced Receiver for AutonomouS MObility,” in Proceedings of the 15th ITS European Congress, May 2023.