A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception

Published 15 Jun 2023 in cs.CV (arXiv:2306.09266v1)

Abstract: Intelligent Transportation Systems (ITS) drastically expand the visibility range and reduce occlusions for autonomous driving. To obtain accurate detections, detailed labeled sensor data for training is required. Unfortunately, high-quality 3D labels of LiDAR point clouds from the infrastructure perspective of an intersection are still rare. We therefore provide the A9 Intersection Dataset, which consists of labeled LiDAR point clouds and synchronized camera images. We recorded the sensor output from two roadside cameras and LiDARs mounted on intersection gantry bridges, and the point clouds were labeled in 3D by experienced annotators. Furthermore, we provide calibration data between all sensors, which allows the projection of the 3D labels into the camera images and an accurate data fusion. Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes. With ten object classes, it offers a high diversity of road users in complex driving maneuvers, such as left and right turns, overtaking, and U-turns. In experiments, we provide multiple baselines for the perception tasks. Overall, our dataset is a valuable contribution to the scientific community for performing complex 3D camera-LiDAR roadside perception tasks. Find data, code, and more information at https://a9-dataset.com.
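The data fusion described in the abstract rests on a standard pinhole projection: a 3D point in the LiDAR frame is transformed into the camera frame with the extrinsic calibration matrix and then mapped to pixel coordinates with the camera intrinsics. The sketch below illustrates this with NumPy; the intrinsic matrix `K` and the identity extrinsic `T_lidar_to_cam` are placeholder values for illustration only, not the dataset's actual calibration.

```python
import numpy as np

# Placeholder calibration; the A9 dataset ships real intrinsic and
# extrinsic matrices for each camera-LiDAR pair.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 600.0],
              [   0.0,    0.0,   1.0]])  # 3x3 camera intrinsics
T_lidar_to_cam = np.eye(4)               # 4x4 extrinsic (identity for the sketch)

def project_points(points_lidar, K, T):
    """Project Nx3 LiDAR points into pixel coordinates.

    Returns the (u, v) pixels of points in front of the camera and a
    boolean mask marking which input points were kept.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T @ pts_h.T)[:3]                         # into the camera frame
    in_front = pts_cam[2] > 0                           # drop points behind the camera
    uv = K @ pts_cam[:, in_front]
    uv = uv[:2] / uv[2]                                 # perspective divide
    return uv.T, in_front

# A point 10 m straight ahead along the optical axis lands on the
# principal point (960, 600) under these placeholder calibrations.
pts = np.array([[0.0, 0.0, 10.0]])
uv, mask = project_points(pts, K, T_lidar_to_cam)
```

Applying the same transform to the eight corners of a labeled 3D box yields its 2D footprint in the image, which is how the dataset's 3D labels can be overlaid on the camera frames.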
