
Designing Effective AI Explanations for Misinformation Detection: A Comparative Study of Content, Social, and Combined Explanations

Published 3 Sep 2025 in cs.HC and cs.MM (arXiv:2509.03693v1)

Abstract: In this paper, we study the problem of AI explanation of misinformation, where the goal is to identify explanation designs that improve users' misinformation detection abilities and their overall user experience. Our work is motivated by the limitations of current Explainable AI (XAI) approaches, which predominantly focus on content explanations that elucidate the linguistic features and sentence structures of misinformation. To address this limitation, we explore explanations beyond content explanation, such as a "social explanation" that considers the broader social context surrounding misinformation, as well as a "combined explanation" in which both the content and social explanations are presented in scenarios where they are either aligned or misaligned with each other. To evaluate the comparative effectiveness of these AI explanations, we conduct two online crowdsourcing experiments: Study 1 in the COVID-19 domain (on Prolific) and Study 2 in the Politics domain (on MTurk). Our results show that AI explanations are generally effective in helping users detect misinformation, with their effectiveness significantly influenced by the alignment between content and social explanations. We also find that the order in which explanation types are presented (specifically, whether a content or social explanation comes first) can influence detection accuracy, with differences between the COVID-19 and Politics domains. This work contributes towards more effective design of AI explanations, fostering a deeper understanding of how different explanation types and their combinations influence misinformation detection.

Summary

  • The paper demonstrates that aligned AI explanations notably increase misinformation detection accuracy compared to misaligned ones.
  • It employs GPT-4 to generate both content-based and socio-contextual explanations across COVID-19 and political domains.
  • User responses indicate that while aligned explanations boost confidence, misaligned ones stimulate analytical thinking.


Introduction

This paper examines the design and efficacy of AI explanations for misinformation detection. The study investigates three explanation types (content, social, and combined) intended to enhance users' ability to identify misinformation. Noting that prior AI explainability methods primarily emphasize content explanations focused on linguistic features, this research innovates by incorporating social explanations, which consider the broader social context in which information spreads.

Explanation Generation Process

Explanations are generated through a process combining content and socio-contextual analyses. First, content cues, such as syntactic, semantic, and structural features, are extracted to form content explanations. In parallel, socio-contextual elements, such as speaker attributes and the context in which the information spreads, are analyzed to produce social explanations. Both explanation types are generated with GPT-4, which is prompted to relate linguistic patterns and social contexts to the credibility of claims (Figure 1).

Figure 1: Overview of the Explanation Generation Process Using GPT.
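The paper reports that GPT-4 produced both explanation types. As a concrete illustration, below is a minimal Python sketch of such a two-pronged generation step, assuming the OpenAI Python client; the prompt wording, field names, and the example claim are hypothetical and are not the authors' actual prompts.

```python
# Hypothetical sketch of the two-pronged explanation generation step.
# The paper reports using GPT-4; the prompts and client usage below are
# assumptions, not the authors' actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONTENT_PROMPT = (
    "Analyze the syntactic, semantic, and structural cues of the claim "
    "below and explain whether these linguistic patterns suggest the "
    "claim is accurate or inaccurate.\n\nClaim: {claim}"
)

SOCIAL_PROMPT = (
    "Analyze the socio-contextual cues of the claim below (speaker "
    "attributes, where and how it spread) and explain whether this "
    "social context suggests the claim is accurate or inaccurate.\n\n"
    "Claim: {claim}\nSpeaker: {speaker}\nContext: {context}"
)

def generate_explanation(prompt_template: str, **fields) -> str:
    """Ask GPT-4 for one explanation, given a filled-in prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt_template.format(**fields)}],
    )
    return response.choices[0].message.content

claim = "Example COVID-19 claim to be fact-checked."
content_expl = generate_explanation(CONTENT_PROMPT, claim=claim)
social_expl = generate_explanation(
    SOCIAL_PROMPT, claim=claim, speaker="unknown", context="viral social post"
)
```

Generating the two explanations independently in this way would also make it straightforward to pair them into aligned or misaligned combinations for the combined conditions described next.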

Experimental Design

Two online studies were conducted in different misinformation domains, COVID-19 and Politics, with participants recruited from Prolific and MTurk, respectively. Each participant was assigned to one of five conditions: no explanation, content, social, aligned (content and social explanations in agreement), or misaligned (content and social explanations in conflict) (Figure 2).

Figure 2: Examples of AI explanations shown to participants. Participants in the content explanation condition saw explanations in an orange box, while participants in the social explanation condition saw explanations in a blue box.
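To make the five-condition design concrete, the sketch below shows one way participants could be assigned to conditions and, in the combined conditions, to a presentation order. The condition labels follow the paper; the assignment logic and counterbalancing scheme are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch of between-subjects condition assignment. The five
# condition labels come from the paper; the assignment logic itself is
# an illustrative assumption.
import random

CONDITIONS = ["no_explanation", "content", "social", "aligned", "misaligned"]

def assign(participant_id: str) -> dict:
    rng = random.Random(participant_id)  # deterministic per participant
    condition = rng.choice(CONDITIONS)
    trial = {"participant": participant_id, "condition": condition}
    if condition in ("aligned", "misaligned"):
        # In combined conditions, the order of the two explanation types
        # is itself varied (content-first vs. social-first), matching the
        # order effects the paper analyzes.
        trial["order"] = rng.choice(["content_first", "social_first"])
    return trial
```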

Results and Analysis

Detection Accuracy

The experiments show that aligned explanations significantly improve misinformation detection accuracy: participants who received aligned explanations were more accurate than those who received misaligned ones. Presentation order also mattered. In the COVID-19 domain, aligned explanations had a stronger effect when the social explanation was presented first; no such order effect appeared in the Politics domain, suggesting that different processing mechanisms are activated depending on domain-specific factors (Figure 3).

Figure 3: Decision changes across all conditions: Effects of presenting misaligned (3a), aligned (3b), content (3c), and social (3d) explanations.
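As an illustration of the kind of comparison behind this result, the sketch below contrasts detection accuracy under the aligned and misaligned conditions using a two-proportion z-test. The counts are placeholders and the choice of test is an assumption; this summary does not report the authors' exact statistical procedure.

```python
# Illustrative sketch of an aligned-vs-misaligned accuracy comparison.
# The counts below are placeholders, and a two-proportion z-test is just
# one reasonable analysis; the summary does not specify the authors' tests.
from statsmodels.stats.proportion import proportions_ztest

# hypothetical counts: (correct detections, total judgments) per condition
aligned_correct, aligned_total = 164, 200
misaligned_correct, misaligned_total = 127, 200

stat, pval = proportions_ztest(
    count=[aligned_correct, misaligned_correct],
    nobs=[aligned_total, misaligned_total],
)
print(f"aligned accuracy:    {aligned_correct / aligned_total:.2%}")
print(f"misaligned accuracy: {misaligned_correct / misaligned_total:.2%}")
print(f"z = {stat:.2f}, p = {pval:.4f}")
```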

User Perception

User experiences with the explanations varied. Aligned explanations were perceived as more useful and as fostering a better understanding of the AI's decisions. Interestingly, although misaligned explanations did not improve decision accuracy, they sometimes encouraged critical thinking, as participants actively scrutinized the conflicting information. This contrast underscores the complexity of users' information processing when faced with conflicting explanations (Figure 4).

Figure 4: Confidence changes across all conditions: Effects of presenting misaligned (4a), aligned (4b), content (4c), and social (4d) explanations.

Discussion

The findings highlight a dual nature in the effectiveness of explanation types: aligned explanations clearly improve detection outcomes, while misaligned explanations promote cognitive engagement, suggesting a potential role in nurturing analytical thinking even when they do not improve immediate detection accuracy.

The research further underscores the importance of explanation alignment: while misaligned explanations have limitations, they also invite users to engage more deeply with the content. Different explanation types trigger different modes of cognitive processing, heuristic or analytical, depending on the misinformation domain.

Conclusion

The study advances the understanding of how to design AI explanations by emphasizing the importance of explanation alignment and presentation order, and it points to directions for enhancing user-centric XAI systems in misinformation detection. Future research should explore more refined methods for balancing explanation alignment, user perception, and domain-specific approaches to misinformation (Figure 5).

Figure 5: Decision changes under misaligned conditions: Effects of presenting content (5a) and social (5b) explanations first; Effects of presenting accurate (5c) and inaccurate (5d) explanations first.
