Explainable Interface for Human-Autonomy Teaming: A Survey

Published 4 May 2024 in cs.AI | arXiv:2405.02583v1

Abstract: Large-scale foundation models are increasingly being integrated into safety-critical applications, including human-autonomy teaming (HAT) in the transportation, medical, and defence domains. The inherent 'black-box' nature of these sophisticated deep neural networks heightens the need to foster mutual understanding and trust between humans and autonomous systems. To tackle the transparency challenges in HAT, this paper studies the underexplored domain of the Explainable Interface (EI) in HAT systems from a human-centric perspective, thereby enriching the existing body of research in Explainable Artificial Intelligence (XAI). We explore the design, development, and evaluation of EIs within XAI-enhanced HAT systems. First, we clarify the distinctions between three related concepts (EI, explanations, and model explainability) to give researchers and practitioners a structured understanding. Second, we contribute a novel EI framework that addresses the unique challenges of HAT. Last, we summarize an evaluation framework for EIs that offers a holistic perspective encompassing model performance, human-centered factors, and group task objectives. Based on extensive surveys across XAI, HAT, psychology, and Human-Computer Interaction (HCI), this review offers multiple novel insights into incorporating XAI into HAT systems and outlines future directions.
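As a purely illustrative sketch of the three-part evaluation perspective the abstract enumerates (model performance, human-centered factors, and group task objectives), one could structure EI evaluation results along those axes. All names below are hypothetical and not taken from the paper:

```python
from dataclasses import dataclass, field


@dataclass
class EIEvaluation:
    """Hypothetical container grouping EI evaluation metrics along the
    three dimensions named in the survey's evaluation framework."""
    # e.g. task accuracy, explanation fidelity of the underlying model
    model_performance: dict = field(default_factory=dict)
    # e.g. trust, cognitive load, user satisfaction
    human_centered: dict = field(default_factory=dict)
    # e.g. team task success rate, time-to-completion for the human-autonomy team
    group_task: dict = field(default_factory=dict)

    def summary(self) -> dict:
        """Collect all three dimensions into one report dictionary."""
        return {
            "model_performance": self.model_performance,
            "human_centered": self.human_centered,
            "group_task": self.group_task,
        }


# Example usage: record one metric per dimension and produce a summary.
evaluation = EIEvaluation(
    model_performance={"explanation_fidelity": 0.91},
    human_centered={"trust_score": 4.2},
    group_task={"team_success_rate": 0.87},
)
report = evaluation.summary()
```

This groups heterogeneous metrics without prescribing any particular measurement instrument, which is left to the individual studies the survey reviews.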

  171. Transformer Debugger.
  172. Richard Mulgan. 2000. ‘Accountability’: An Ever-Expanding Concept? Public Administration 78, 3 (2000), 555–573. https://doi.org/10.1111/1467-9299.00218
  173. Updating Our Understanding of Situation Awareness in Relation to Remote Operators of Autonomous Vehicles. Cognitive Research: Principles and Implications 6, 1 (2021), 9. https://doi.org/10.1186/s41235-021-00271-8
  174. Improving Usefulness of Automated Driving by Lowering Primary Task Interference through HMI Design. Journal of Advanced Transportation 2017 (2017), e6105087. https://doi.org/10.1155/2017/6105087
  175. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. Comput. Surveys 55, 13s (2023), 1–42. https://doi.org/10.1145/3583558
  176. Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance. In Engineering Psychology and Cognitive Ergonomics (Lecture Notes in Computer Science), Don Harris (Ed.). Springer International Publishing, Cham, 204–214. https://doi.org/10.1007/978-3-319-91122-9_18
  177. Catherine Neubauer. 2023. HAT3: The Human Autonomy Team Trust Toolkit. In Companion Publication of the 25th International Conference on Multimodal Interaction (Paris, France) (ICMI ’23 Companion). Association for Computing Machinery, New York, NY, USA, 115–118. https://doi.org/10.1145/3610661.3620660
  178. TwinExplainer: Explaining Predictions of an Automotive Digital Twin. arXiv:2302.00152 [cs]
  179. Kazuo Okamura and Seiji Yamada. 2020. Adaptive Trust Calibration for Human-AI Collaboration. PLoS ONE 15, 2 (2020), e0229132.
  180. Evaluating How Interfaces Influence the User Interaction with Fully Autonomous Vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. Association for Computing Machinery. https://doi.org/10.1145/3239060.3239065
  181. Explanations in Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems 23, 8 (2022), 10142–10162. https://doi.org/10.1109/tits.2021.3122865
  182. Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature. Human Factors 64, 5 (2022), 904–938. https://doi.org/10.1177/0018720820960865
  183. Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence (AI) and Autonomous Robotic Surgery. The International Journal of Medical Robotics and Computer Assisted Surgery: MRCAS 15, 1 (2019), e1968. https://doi.org/10.1002/rcs.1968
  184. Sharon Oviatt and Philip Cohen. 2015. The Paradigm Shift to Multimodality in Contemporary Computer Interfaces. Morgan & Claypool Publishers. https://doi.org/10.2200/S00636ED1V01Y201503HCI030
  185. Transparency in Autonomous Teammates: Intention to Support as Teaming Information. Journal of Cognitive Engineering and Decision Making 14, 2 (2020), 174–190. https://doi.org/10.1177/1555343419881563
  186. Co-Design of Human-Centered, Explainable AI for Clinical Decision Support. ACM Transactions on Interactive Intelligent Systems (2023), 3587271. https://doi.org/10.1145/3587271
  187. Understanding the Impact of Explanations on Advice-Taking: A User Study for AI-based Clinical Decision Support Systems. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans LA USA, 1–9. https://doi.org/10.1145/3491102.3502104
  188. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, Salt Lake City, UT, 8779–8788. https://doi.org/10.1109/CVPR.2018.00915
  189. Challenges in Digital Twin Development for Cyber-Physical Production Systems. 28–48 pages. https://doi.org/10.1007/978-3-030-23703-5_2 arXiv:2102.03341 [cs]
  190. Impact of Data Visualization on Decision-Making and Its Implications for Public Health Practice: A Systematic Literature Review. Informatics for Health & Social Care 47, 2 (2022), 175–193. https://doi.org/10.1080/17538157.2021.1982949
  191. Learning Saliency Maps to Explain Deep Time Series Classifiers. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM ’21). Association for Computing Machinery, New York, NY, USA, 1406–1415. https://doi.org/10.1145/3459637.3482446
  192. Information Olfactation: Harnessing Scent to Convey Data. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2019), 726–736. https://doi.org/10.1109/TVCG.2018.2865237
  193. Explainable Deep Learning Methods in Medical Image Classification: A Survey. Comput. Surveys 56, 4 (2023), 1–41. https://doi.org/10.1145/3625287
  194. Judea Pearl. 2009. Causal Inference in Statistics: An Overview. Statistics Surveys 3 (2009). https://doi.org/10.1214/09-SS057
  195. Judea Pearl and Dana Mackenzie. 2018. The Book of Why: The New Science of Cause and Effect. Basic books.
  196. Artificial Intelligence as a Medical Device in Radiology: Ethical and Regulatory Issues in Europe and the United States. Insights into Imaging 9, 5 (2018), 745–753. https://doi.org/10.1007/s13244-018-0645-y
  197. RISE: Randomized Input Sampling for Explanation of Black-box Models. arXiv:1806.07421 [cs]
  198. Explainability in Medicine in an Era of AI-based Clinical Decision Support Systems. Frontiers in Genetics 13 (2022), 903600. https://doi.org/10.3389/fgene.2022.903600
  199. Robert Plutchik. 2003. Emotions and Life: Perspectives from Psychology, Biology, and Evolution. American Psychological Association, Washington, DC, US. xix, 381 pages.
  200. A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges. Comput. Surveys (2023), 1–35. https://doi.org/10.1145/3618105
  201. Causal Inference and Counterfactual Prediction in Machine Learning for Actionable Healthcare. Nature Machine Intelligence 2, 7 (2020), 369–375. https://doi.org/10.1038/s42256-020-0197-y
  202. First Workshop on Adaptive and Personalized Explainable User Interfaces (Apex-Ui 2022). In 27th International Conference on Intelligent User Interfaces. 1–3.
  203. Interactive Explanations by Conflict Resolution via Argumentative Exchanges. https://doi.org/10.48550/arXiv.2303.15022 arXiv:2303.15022 [cs]
  204. Towards a Robot-Based Multimodal Framework to Assess the Impact of Fatigue on User Behavior and Performance: A Pilot Study. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA ’19). Association for Computing Machinery, New York, NY, USA, 493–498. https://doi.org/10.1145/3316782.3322776
  205. Artificial Intelligence-Based Clinical Decision Support in Pediatrics. Pediatric Research 93, 2 (2023), 334–341. https://doi.org/10.1038/s41390-022-02226-1
  206. TsSHAP: Robust Model Agnostic Feature-Based Explainability for Time Series Forecasting. https://doi.org/10.48550/ARXIV.2303.12316
  207. ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. https://doi.org/10.48550/arXiv.1602.04938 arXiv:1602.04938 [cs, stat]
  208. Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the AAAI Conference on Artificial Intelligence 32, 1 (2018). https://doi.org/10.1609/aaai.v32i1.11491
  209. Rafael F. Ribeiro and Paula D. P. Costa. 2019. Driver Gaze Zone Dataset With Depth Data. In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019). 1–5. https://doi.org/10.1109/FG.2019.8756592
  210. Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface. In Proceedings of the 20th ACM International Conference on Multimodal Interaction. ACM, Boulder CO USA, 384–392. https://doi.org/10.1145/3242969.3242974
  211. Avi Rosenfeld. 2021. Better Metrics for Evaluating Explainable Artificial Intelligence. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems. 45–50.
  212. Avi Rosenfeld and Ariella Richardson. 2019. Explainability in Human–Agent Systems. Autonomous Agents and Multi-Agent Systems 33, 6 (2019), 673–705. https://doi.org/10.1007/s10458-019-09408-y
  213. Knowledge Graph-Based Rich and Confidentiality Preserving Explainable Artificial Intelligence (XAI). Information Fusion 81 (2022), 91–102. https://doi.org/10.1016/j.inffus.2021.11.015
  214. Cynthia Rudin. 2019. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1, 5 (2019), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  215. Explainable Goal-driven Agents and Robots - A Comprehensive Review. Comput. Surveys 55, 10 (2023), 211:1–211:41. https://doi.org/10.1145/3564240
  216. Situation Awareness Measurement: A Review of Applicability for C4i Environments. Applied Ergonomics 37, 2 (2006), 225–238. https://doi.org/10.1016/j.apergo.2005.02.001
  217. Kevin Joel Salubre and Dan Nathan-Roberts. 2021. Takeover Request Design in Automated Driving: A Systematic Review. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, 1 (2021), 868–872. https://doi.org/10.1177/1071181321651296
  218. Lindsay Sanneman. 2023. Transparent Value Alignment: Foundations for Human-Centered Explainable AI in Alignment. Thesis. Massachusetts Institute of Technology.
  219. Lindsay Sanneman and Julie A. Shah. 2022. The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems. International Journal of Human–Computer Interaction 38, 18-20 (2022), 1772–1788. https://doi.org/10.1080/10447318.2022.2081282
  220. Towards Meaningfully Integrating Human-Autonomy Teaming in Applied Settings. In Proceedings of the 8th International Conference on Human-Agent Interaction. ACM, Virtual Event USA, 149–156. https://doi.org/10.1145/3406499.3415077
  221. Increasing the User Experience in Autonomous Driving through Different Feedback Modalities. In 26th International Conference on Intelligent User Interfaces. ACM, College Station TX USA, 7–10. https://doi.org/10.1145/3397481.3450687
  222. Towards Causal Representation Learning. https://arxiv.org/abs/2102.11107v1.
  223. Trusting the X in XAI: Effects of Different Types of Explanations by a Self-Driving Car on Trust, Explanation Satisfaction and Mental Models. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, 1 (2020), 339–343. https://doi.org/10.1177/1071181320641077
  224. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In 2017 IEEE International Conference on Computer Vision (ICCV). 618–626. https://doi.org/10.1109/ICCV.2017.74
  225. Machine Learning and Physics: A Survey of Integrated Models. Comput. Surveys 56, 5 (2023), 115:1–115:33. https://doi.org/10.1145/3611383
  226. Knowledge-Intensive Language Understanding for Explainable AI. https://doi.org/10.48550/arXiv.2108.01174 arXiv:2108.01174 [cs]
  227. Learning Important Features Through Propagating Activation Differences. arXiv:1704.02685 [cs]
  228. Opportunities for explainable artificial intelligence in aerospace predictive maintenance. In PHM Society European Conference, Vol. 5. 11–11.
  229. Keng Siau and Weiyu Wang. 2018. Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal 31 (2018), 47–53.
  230. Review of Surgical Robotics User Interface: What Is the Best Way to Control Robotic Surgery? Surgical Endoscopy 26, 8 (2012), 2117–2125. https://doi.org/10.1007/s00464-012-2182-y
  231. Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20). Association for Computing Machinery, New York, NY, USA, 180–186. https://doi.org/10.1145/3375627.3375830
  232. Decision Trees with Short Explainable Rules. In Advances in Neural Information Processing Systems.
  233. Timo Speith. 2022. A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, Seoul Republic of Korea, 2239–2250. https://doi.org/10.1145/3531146.3534639
  234. Improving Teamwork Competencies in Human-Machine Teams: Perspectives From Team Science. Frontiers in Psychology 12 (2021), 590290. https://doi.org/10.3389/fpsyg.2021.590290
  235. ProtoAI: Model-Informed Prototyping for AI-Powered Interfaces. In 26th International Conference on Intelligent User Interfaces (IUI ’21). Association for Computing Machinery, New York, NY, USA, 48–58. https://doi.org/10.1145/3397481.3450640
  236. Agus Sudjianto and Aijun Zhang. 2021. Designing Inherently Interpretable Machine Learning Models. https://doi.org/10.48550/arXiv.2111.01743 arXiv:2111.01743 [cs, stat]
  237. Improving Explainable AI with Patch Perturbation-Based Evaluation Pipeline: A COVID-19 X-ray Image Analysis Case Study. Scientific Reports 13, 1 (2023), 19488. https://doi.org/10.1038/s41598-023-46493-2
  238. Axiomatic Attribution for Deep Networks. arXiv:1703.01365 [cs]
  239. Visual, Textual or Hybrid: The Effect of User Expertise on Different Explanations. In 26th International Conference on Intelligent User Interfaces (IUI ’21). Association for Computing Machinery, New York, NY, USA, 109–119. https://doi.org/10.1145/3397481.3450662
  240. Explaining Health Recommendations to Lay Users: The Dos and Don’ts. In Joint Proceedings of the IUI 2022 Workshops: APEx-UI, HAI-GEN, HEALTHI, HUMANIZE, TExSS, SOCIALIZE Co-Located with the ACM International Conference on Intelligent User Interfaces (IUI 2022). CEUR Workshop Proceedings, 1–10.
  241. Karim A. Tahboub. 2006. Intelligent Human-Machine Interaction Based on Dynamic Bayesian Networks Probabilistic Intention Recognition. Journal of Intelligent and Robotic Systems 45, 1 (2006), 31–52. https://doi.org/10.1007/s10846-005-9018-0
  242. Yifan Tang and Yan Xu. 2021. Multi-Agent Deep Reinforcement Learning for Solving Large-scale Air Traffic Flow Management Problem: A Time-Step Sequential Decision Approach. In 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC). IEEE, San Antonio, TX, USA, 1–10. https://doi.org/10.1109/DASC52595.2021.9594329
  243. Digital Twin-Driven Product Design, Manufacturing and Service with Big Data. The International Journal of Advanced Manufacturing Technology 94, 9 (2018), 3563–3576. https://doi.org/10.1007/s00170-017-0233-1
  244. TAPAS. 2022. https://tapas-atm.eu/. Accessed on 2024-03-31.
  245. Sule Tekkesinoglu. 2024. Exploring Evaluation Methodologies for Explainable AI: Guidelines for Objective and Subjective Assessment. SSRN Electronic Journal (2024). https://doi.org/10.2139/ssrn.4667052
  246. Leveraging Explanations in Interactive Machine Learning: An Overview. Frontiers in Artificial Intelligence 6 (2023). https://doi.org/10.3389/frai.2023.1066049
  247. Guidelines and Regulatory Framework for Machine Learning in Aviation. In AIAA SCITECH 2022 Forum. American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2022-1132
  248. Effects of Adaptive Robot Dialogue on Information Exchange and Social Relations. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction. 126–133.
  249. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv preprint arXiv:2307.09288 (2023). arXiv:2307.09288
  250. Evaluation of Post-Hoc Interpretability Methods in Time-Series Classification. Nature Machine Intelligence 5, 3 (2023), 250–260. https://doi.org/10.1038/s42256-023-00620-w
  251. Deciphering Diagnoses: How Large Language Models Explanations Influence Clinical Decision Making. arXiv:2310.01708 [cs]
  252. An Explainable Artificial Intelligence System for Small-unit Tactical Behavior.
  253. Attention Is All You Need. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
  254. Meaningful human control and variable autonomy in human-robot teams for firefighting. Frontiers in Robotics and AI 11 (2024), 1323980.
  255. Giulia Vilone and Luca Longo. 2021a. Classification of Explainable Artificial Intelligence Methods through Their Output Formats. Machine Learning and Knowledge Extraction 3, 3 (2021), 615–661. https://doi.org/10.3390/make3030032
  256. Giulia Vilone and Luca Longo. 2021b. Notions of Explainability and Evaluation Approaches for Explainable Artificial Intelligence. Information Fusion 76 (2021), 89–106. https://doi.org/10.1016/j.inffus.2021.05.009
  257. Paul Voigt and Axel Von dem Bussche. 2017. The EU General Data Protection Regulation (GDPR): A Practical Guide (1st ed.). Springer International Publishing, Cham.
  258. N. K. Vøllestad. 1997. Measurement of Human Muscle Fatigue. Journal of Neuroscience Methods 74, 2 (1997), 219–227. https://doi.org/10.1016/s0165-0270(97)02251-6
  259. George A. Vouros. 2022. Explainable Deep Reinforcement Learning: State of the Art and Challenges. Comput. Surveys 55, 5 (2022), 92:1–92:39. https://doi.org/10.1145/3527448
  260. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. SSRN Electronic Journal (2017). https://doi.org/10.2139/ssrn.3063289
  261. The Effect of Infographics on Recall of Information about Genetically Modified Foods. Journal of Agricultural Education 61, 3 (2020), 22–37. https://doi.org/10.5032/jae.2020.03022
  262. Team Structure and Team Building Improve Human–Machine Teaming With Autonomous Agents. Journal of Cognitive Engineering and Decision Making 13, 4 (2019), 258–278. https://doi.org/10.1177/1555343419867563
  263. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300831
  264. Human-Centered Design and Evaluation of AI-empowered Clinical Decision Support Systems: A Systematic Review. Frontiers in Computer Science 5 (2023).
  265. A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization. IEEE Transactions on Visualization and Computer Graphics 28, 12 (2022), 5134–5153. https://doi.org/10.1109/TVCG.2021.3106142 arXiv:2012.00467 [cs]
  266. A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances. arXiv:2203.06935 [cs]
  267. Aiden Warren and Alek Hillas. 2020. Friend or Frenemy? The Role of Trust in Human-Machine Teaming and Lethal Autonomous Weapons Systems. Small Wars & Insurgencies 31, 4 (2020), 822–850. https://doi.org/10.1080/09592318.2020.1743485
  268. Beyond Explaining: Opportunities and Challenges of XAI-based Model Improvement. Information Fusion 92 (2023), 154–176. https://doi.org/10.1016/j.inffus.2022.11.013
  269. Explainable Online Lane Change Predictions on a Digital Twin with a Layer Normalized LSTM and Layer-wise Relevance Propagation. https://doi.org/10.48550/arXiv.2204.01292 arXiv:2204.01292 [cs]
  270. Alexandra Weidemann and Nele Rußwinkel. 2021. The Role of Frustration in Human–Robot Interaction – What Is Needed for a Successful Collaboration? Frontiers in Psychology 12 (2021), 640186. https://doi.org/10.3389/fpsyg.2021.640186
  271. “Let Me Explain!”: Exploring the Potential of Virtual Agents in Explainable AI Interaction Design. Journal on Multimodal User Interfaces 15, 2 (2021), 87–98. https://doi.org/10.1007/s12193-020-00332-0
  272. Lindsay Wells and Tomasz Bednarz. 2021. Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends. Frontiers in Artificial Intelligence 4 (2021). https://doi.org/10.3389/frai.2021.550030
  273. Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Virtual Event DC USA, 252–261. https://doi.org/10.1145/3409120.3410659
  274. Best Practices in Clinical Decision Support. Applied Clinical Informatics 1, 3 (2010), 331–345. https://doi.org/10.4338/ACI-2010-05-RA-0031
  275. Agent Transparency and Reliability in Human–Robot Interaction: The Influence on User Confidence and Perceived Reliability. IEEE Transactions on Human-Machine Systems 50, 3 (2020), 254–263. https://doi.org/10.1109/THMS.2019.2925717
  276. A Causality Inspired Framework for Model Interpretation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’23). Association for Computing Machinery, New York, NY, USA, 2731–2741. https://doi.org/10.1145/3580305.3599240
  277. Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era. arXiv:2403.08946 [cs]
  278. Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic. arXiv:2109.08927 [cs]
  279. Assessing the Communication Gap between AI Models and Healthcare Professionals: Explainability, Utility and Trust in AI-driven Clinical Decision-Making. Artificial Intelligence 316 (2023), 103839. https://doi.org/10.1016/j.artint.2022.103839
  280. Physics-Constrained Automatic Feature Engineering for Predictive Modeling in Materials Science. Proceedings of the AAAI Conference on Artificial Intelligence 35, 12 (2021), 10414–10421. https://doi.org/10.1609/aaai.v35i12.17247
  281. Outlining the Design Space of Explainable Intelligent Systems for Medical Diagnosis. arXiv preprint arXiv:1902.06019 (2019). arXiv:1902.06019
  282. Learning from the Dark Side: A Parallel Time Series Modelling Framework for Forecasting and Fault Detection on Intelligent Vehicles. IEEE Transactions on Intelligent Vehicles (2023), 1–15. https://doi.org/10.1109/TIV.2023.3342648
  283. Driver Steering Behaviour Modelling Based on Neuromuscular Dynamics and Multi-Task Time-Series Transformer. Automotive Innovation 7, 1 (2024), 45–58. https://doi.org/10.1007/s42154-023-00272-x
  284. Advanced Driver Intention Inference: Theory and Design. Elsevier.
  285. Toward Human-Vehicle Collaboration: Review and Perspectives on Human-Centered Collaborative Automated Driving. Transportation Research Part C: Emerging Technologies 128 (2021), 103199. https://doi.org/10.1016/j.trc.2021.103199
  286. XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques. arXiv:2402.12685 [cs]
  287. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. arXiv:1502.03044 [cs]
  288. Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity. https://doi.org/10.48550/arXiv.2202.12482 arXiv:2202.12482 [cs, math, stat]
  289. Wei Xu and Marvin Dainoff. 2023. Enabling Human-Centered AI: A New Junction and Shared Journey between AI and HCI Communities. https://doi.org/10.48550/arXiv.2111.08460 arXiv:2111.08460 [cs]
  290. XAIR: A Framework of Explainable AI in Augmented Reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–30. https://doi.org/10.1145/3544548.3581500
  291. Medical Robotics—Regulatory, Ethical, and Legal Considerations for Increasing Levels of Autonomy. Science Robotics 2, 4 (2017), eaam8638. https://doi.org/10.1126/scirobotics.aam8638
  292. Harnessing Biomedical Literature to Calibrate Clinicians’ Trust in AI Decision Support Systems. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3544548.3581393
  293. What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes. https://doi.org/10.48550/arXiv.2011.05064 arXiv:2011.05064 [cs, stat]
  294. Domain Knowledge Guided Deep Learning with Electronic Health Records. In 2019 IEEE International Conference on Data Mining (ICDM). IEEE, Beijing, China, 738–747. https://doi.org/10.1109/ICDM.2019.00084
  295. White-Box Transformers via Sparse Rate Reduction. https://doi.org/10.48550/arXiv.2306.01129 arXiv:2306.01129 [cs]
  296. In Situ Bidirectional Human-Robot Value Alignment. Science Robotics 7, 68 (2022), eabm4183. https://doi.org/10.1126/scirobotics.abm4183
  297. Hanna Yun and Ji Hyun Yang. 2020. Multimodal Warning Design for Take-over Request in Conditionally Automated Driving. European Transport Research Review 12, 1 (2020), 34. https://doi.org/10.1186/s12544-020-00427-5
  298. Big Bird: Transformers for Longer Sequences. arXiv:2007.14062 [cs, stat]
  299. Surgical Gesture Recognition Based on Bidirectional Multi-Layer Independently RNN with Explainable Spatial Feature Extraction. https://doi.org/10.48550/arXiv.2105.00460 arXiv:2105.00460 [cs]
  300. Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors. https://doi.org/10.48550/arXiv.2006.15417 arXiv:2006.15417 [cs]
  301. Explaining Agent Behavior with Large Language Models. arXiv:2309.10346 [cs]
  302. How Causal Information Affects Decisions. Cognitive Research: Principles and Implications 5, 1 (2020), 6. https://doi.org/10.1186/s41235-020-0206-z
  303. On the Robustness of Post-hoc GNN Explainers to Label Noise. https://doi.org/10.48550/arXiv.2309.01706 arXiv:2309.01706 [cs]
  304. Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation. In 2018 IEEE conference on computational intelligence and games (CIG). IEEE, 1–8.
  305. Julia El Zini and Mariette Awad. 2022. On the Explainability of Natural Language Processing Deep Models. Comput. Surveys 55, 5 (2022), 1–31. https://doi.org/10.1145/3529755 arXiv:2210.06929 [cs]