
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Published 2 Oct 2022 in cs.HC, cs.AI, cs.CV, and cs.CY | arXiv:2210.03735v2

Abstract: Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.


Summary

  • The paper demonstrates that tailored, human-like XAI explanations effectively address end-user needs in practical settings.
  • The study employed a mixed-methods approach with 20 diverse participants using the Merlin bird identification app to assess user perception.
  • The research reveals that actionable explanations enhance trust, improve collaboration with AI, and guide iterative XAI design.

Understanding How Explainability Can Support Human-AI Interaction

The paper "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction, presented at the CHI 2023 conference, explores Explainable AI (XAI) from the perspective of end-users. Despite the proliferation of XAI methods, end-users' explainability needs remain poorly understood. This research aims to bridge that gap by providing insights into what users actually require from explanations to improve human-AI interaction, particularly in real-world applications.

The study uses the Merlin bird identification app as a case study of a real-world AI application. Through a mixed-methods study involving 20 participants, the researchers sought to understand end-users' needs, uses, and perceptions concerning XAI explanations in this context. Participants varied in both their AI and birding expertise, ensuring a diverse set of perspectives on explainability.

The key findings indicate that participants desired explanations that are practically useful and that facilitate better collaboration with the AI system, rather than purely technical system details. Notably, explanations that mirror human reasoning, particularly part-based approaches such as concept-based and prototype-based explanations, resonated with users, suggesting an affinity for explanations that align with intuitive human understanding.

Participants articulated a range of uses for XAI explanations beyond merely understanding AI outputs. They expressed intentions to use explanations to calibrate their trust in the AI, improve their skills in the related tasks, supply better inputs to the AI, and give constructive feedback to developers for system improvement. The desire to use explanations to enhance collaboration with the AI signals a shift toward viewing AI as a teammate rather than simply a tool.

The study posits several implications for future XAI development:

  1. Human-Centric Design: Emphasize designing explanations that align with human cognitive processes and enhance user interaction with AI systems.
  2. Actionable Insights: Provide explanations that offer clear and actionable recommendations for users, aiding them in making informed decisions about interacting with AI systems.
  3. User-Centric Evaluation: Continuously engage with end-users to evaluate and iterate on XAI methods, ensuring alignment with user needs and contexts.

In summary, this paper contributes significantly to the discourse on XAI by foregrounding the user's perspective, which is crucial for practical deployments. Addressing the identified needs will likely enhance the usability and acceptance of AI systems in everyday contexts. As AI systems become increasingly integrated into daily life, fostering a human-centered approach in the development of XAI will be pivotal in advancing human-AI collaboration in diverse domains.
