
Towards Trustworthy Automated Driving through Qualitative Scene Understanding and Explanations

Published 25 Mar 2024 in cs.AI (arXiv:2403.16908v1)

Abstract: Understanding driving scenes and communicating automated vehicle decisions are key requirements for trustworthy automated driving. In this article, we introduce the Qualitative Explainable Graph (QXG), a unified symbolic and qualitative representation for scene understanding in urban mobility. The QXG enables interpreting an automated vehicle's environment using sensor data and machine learning models. It utilizes spatio-temporal graphs and qualitative constraints to extract scene semantics from raw sensor inputs, such as LiDAR and camera data, offering an interpretable scene model. A QXG can be constructed incrementally in real time, making it a versatile tool for in-vehicle explanations across various sensor types. Our research showcases the potential of the QXG, particularly in the context of automated driving, where it can rationalize decisions by linking the graph with observed actions. These explanations can serve diverse purposes, from informing passengers and alerting vulnerable road users to enabling post-hoc analysis of prior behaviors.
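The abstract's core idea, a spatio-temporal graph whose edges carry qualitative relations between tracked objects and which grows incrementally as frames arrive, can be illustrated with a minimal sketch. Note that the abstract does not specify the QXG's actual data structures or qualitative calculi, so the object fields, relation names (`left_of`, `ahead_of`), and coordinate conventions below are assumptions for illustration only:

```python
# Illustrative sketch, NOT the paper's implementation: relation names,
# fields, and the ego-centric coordinate convention are assumed.
from dataclasses import dataclass, field

@dataclass
class Detection:
    obj_id: str
    frame: int
    x: float  # assumed: longitudinal position relative to ego (m)
    y: float  # assumed: lateral offset (m)

def qualitative_relation(a: Detection, b: Detection) -> str:
    """Abstract metric positions into a coarse qualitative relation."""
    side = "left_of" if a.y > b.y else "right_of"
    depth = "ahead_of" if a.x > b.x else "behind"
    return f"{side}&{depth}"

@dataclass
class QXG:
    # edges[(obj1, obj2)] -> time-indexed list of (frame, relation)
    edges: dict = field(default_factory=dict)

    def update(self, detections: list[Detection]) -> None:
        """Incrementally add the current frame's pairwise relations,
        so the graph is usable for explanation at any point in time."""
        for i, a in enumerate(detections):
            for b in detections[i + 1:]:
                key = (a.obj_id, b.obj_id)
                self.edges.setdefault(key, []).append(
                    (a.frame, qualitative_relation(a, b)))

g = QXG()
g.update([Detection("ego", 0, 0.0, 0.0), Detection("ped1", 0, 5.0, 2.0)])
g.update([Detection("ego", 1, 1.0, 0.0), Detection("ped1", 1, 4.5, 1.5)])
print(g.edges[("ego", "ped1")])
# -> [(0, 'right_of&behind'), (1, 'right_of&behind')]
```

Because each edge stores a per-frame history of symbolic relations rather than raw coordinates, a downstream explainer could link an action (e.g. braking) to a human-readable change in relations, which is the kind of rationalization the abstract describes.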

