Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features

Published 30 Aug 2024 in cs.AI and cs.LG | (2408.17198v2)

Abstract: Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems, where traditional XAI approaches typically offer one level of abstraction for explanations, often in the form of heatmaps highlighting single or multiple input features. However, we ask whether abstract reasoning or problem-solving strategies of a model may also be relevant, as these align more closely with how humans approach solutions to problems. We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. The effectiveness of our framework is demonstrated in the domains of NLP, vision, and quantum chemistry (QC), where abstract symbolic domain knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas.

Summary

  • The paper presents a novel framework that uses multi-order decomposition and logical queries to explain AI model predictions.
  • It employs both propagation-based and perturbation-based methods to assign relevance scores to feature interactions, enhancing model interpretability.
  • Empirical evaluations in NLP, CV, and QC demonstrate that Symbolic XAI effectively bridges complex model reasoning with human analytic approaches.

Towards Symbolic XAI: Explanation Through Human-Understandable Logical Relationships Between Features

The paper, "Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features," authored by Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, and Klaus-Robert Müller, introduces an innovative framework in the sphere of Explainable Artificial Intelligence (XAI). This novel framework, termed Symbolic XAI, aims to augment the interpretability of machine learning models by attributing relevance to symbolic queries that encapsulate logical relationships between input features.

Overview and Motivation

Traditional approaches in XAI predominantly focus on generating feature attribution heatmaps that highlight significant individual input features or groups of features. These methods are typically constrained to first-order explanations. However, this paper posits that incorporating more abstract reasoning, akin to human problem-solving strategies, can enhance the transparency and comprehensibility of AI models.

The Symbolic XAI framework is built upon a multi-order decomposition of model predictions. This decomposition can either utilize higher-order propagation-based methods such as GNN-LRP or perturbation-based explanation methods. The primary objective is to capture abstract reasoning in models' decision-making processes, thereby aligning more closely with human analytic approaches.

Symbolic XAI Framework

Multi-Order Decomposition: At the core of the Symbolic XAI framework is the multi-order decomposition of model predictions, which expresses the prediction as a sum of contributions from different feature subsets. This is formalized as $f(\bm{X}) = \sum_{\mathcal{L} \subseteq \mathcal{N}} \mu_{\mathcal{L}}$, where each term $\mu_{\mathcal{L}}$ represents the unique contribution of the feature subset $\mathcal{L}$ of the input features $\mathcal{N}$.

Methods for Decomposition: The decomposition can be specified using two principal approaches:

  1. Propagation-Based Approach: Utilizing LRP techniques extended to higher orders (e.g., GNN-LRP), where relevance scores are assigned to sequences of feature indices.
  2. Perturbation-Based Approach: Estimating the model's prediction for smaller input areas by perturbing features not in the subset, using methods like the Harsanyi dividend for computation.
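The perturbation-based variant of this decomposition can be illustrated in a few lines. The sketch below computes Harsanyi dividends for a hypothetical toy "model" `f` that scores feature subsets (the function, features, and values are illustrative assumptions, not the paper's experiments); by Möbius inversion, summing all dividends recovers the full prediction, mirroring the decomposition formula above.

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of items, including the empty set."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_dividends(f, features):
    """Moebius inversion: mu_L = sum over T subset of L of (-1)^(|L|-|T|) f(T)."""
    return {L: sum((-1) ** (len(L) - len(T)) * f(T) for T in powerset(L))
            for L in powerset(features)}

# Toy stand-in for a model evaluated on a feature subset (features outside
# the subset are treated as perturbed away). Purely illustrative.
def f(subset):
    S = set(subset)
    score = 0.0
    if 'a' in S: score += 1.0
    if 'b' in S: score += 0.5
    if 'a' in S and 'c' in S: score += 2.0  # an interaction term
    return score

mu = harsanyi_dividends(f, ['a', 'b', 'c'])
print(sum(mu.values()), f(['a', 'b', 'c']))  # dividends sum back to f(X)
print(mu[('a', 'c')])  # the interaction between a and c is isolated: 2.0
```

The exponential cost of enumerating all subsets is why, at scale, the paper's propagation-based route (e.g., GNN-LRP) or restricted subset families become attractive.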

Relevance Attribution of Logical Formulas (Queries): The framework also introduces a mechanism to calculate the relevance of logical formulas (queries) that express relationships between features. These queries combine logical conjunction ($\wedge$) and negation ($\neg$), and are attributed relevance scores through a mapping to the multi-order decomposition terms.
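One simple way to instantiate such a mapping is to treat a query as a predicate over feature subsets and sum the multi-order terms of the subsets that satisfy it. The sketch below assumes this "sum over satisfying subsets" semantics with hypothetical $\mu_{\mathcal{L}}$ values; the paper's exact attribution rule may differ in detail.

```python
# Hypothetical multi-order terms mu_L for three features a, b, c.
mu = {
    frozenset(): 0.0,   frozenset('a'): 1.0,  frozenset('b'): 0.5,
    frozenset('c'): 0.0, frozenset('ab'): 0.0, frozenset('ac'): 2.0,
    frozenset('bc'): 0.0, frozenset('abc'): 0.0,
}

def query_relevance(mu, predicate):
    """Relevance of a logical query: sum mu_L over subsets L that satisfy it."""
    return sum(v for L, v in mu.items() if predicate(L))

# Query "a AND c": subsets containing both features.
print(query_relevance(mu, lambda L: 'a' in L and 'c' in L))      # 2.0
# Query "a AND NOT b": subsets containing a but not b.
print(query_relevance(mu, lambda L: 'a' in L and 'b' not in L))  # 3.0
```

Because the decomposition is additive, relevance assigned to one query never double-counts a subset already claimed by a disjoint query under this semantics.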

Automated Query Search: Another critical component is the automatic search for the most expressive queries that best describe the model's decision-making process. By optimizing a similarity measure, these queries can be identified efficiently, ensuring that users receive the most pertinent abstract explanations.
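A minimal version of such a search can be brute-forced over small query families. The sketch below enumerates all fully specified conjunctions (each feature either required or negated) and picks the one capturing the most relevance; the $\mu_{\mathcal{L}}$ values and the scoring proxy are illustrative assumptions, standing in for the paper's similarity-measure optimization.

```python
from itertools import product

# Hypothetical multi-order terms mu_L for three features (illustrative values).
mu = {
    frozenset(): 0.0,   frozenset('a'): 1.0,  frozenset('b'): 0.5,
    frozenset('c'): 0.0, frozenset('ab'): 0.0, frozenset('ac'): 2.0,
    frozenset('bc'): 0.0, frozenset('abc'): 0.0,
}
features = ['a', 'b', 'c']

def relevance(signs):
    """Relevance of the conjunction where each feature is required (+) or negated (-)."""
    def satisfies(L):
        return all((f in L) == (s == '+') for f, s in zip(features, signs))
    return sum(v for L, v in mu.items() if satisfies(L))

# Brute-force search over all fully specified conjunction queries.
best = max(product('+-', repeat=len(features)), key=relevance)
query = ' AND '.join(f if s == '+' else f'NOT {f}'
                     for f, s in zip(features, best))
print(query, relevance(best))  # a AND NOT b AND c 2.0
```

Exhaustive enumeration grows exponentially in the number of features, which is precisely why the paper's optimization of a similarity measure, rather than brute force, matters in practice.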

Empirical Evaluation and Applications

The Symbolic XAI framework's versatility and effectiveness are demonstrated through application across multiple domains, specifically NLP, Computer Vision (CV), and Quantum Chemistry (QC).

  1. NLP: The framework was tested on sentiment analysis tasks using the SST and Movie Reviews datasets. Evaluated via input-flipping strategies and comparison with ground-truth annotations, the relevance scores obtained for logical queries aligned closely with human interpretability, accurately capturing contextual dependencies and nuanced sentiment.
  2. CV: Applied to a vision transformer model for facial expression recognition (FER), the Symbolic XAI framework revealed that the model's decision-making relied heavily on complex interactions between interpretable segments such as the mouth and eye regions. The framework was able to predict the model's performance on unseen, augmented data (e.g., faces with obscured features) by assigning relevance to logical queries of feature absence.
  3. QC: The paper also demonstrated the utility of Symbolic XAI in explaining models used for molecular dynamics (MD) simulations of proton transfer reactions. By attributing relevance to interactions between specific atoms, the framework provided insights that paralleled known chemical intuitions, validating the detailed interaction mechanisms in molecular trajectories.

Implications and Future Directions

The Symbolic XAI framework presents significant practical and theoretical implications. Practically, it facilitates the translation of complex model predictions into human-understandable formats by leveraging logical relationships, thereby fostering trust and transparency in AI systems. Theoretically, it bridges the gap between human reasoning and AI decision-making by aligning model explanations with abstract reasoning strategies.

Regarding future developments, the paper suggests exploring the complexity of this framework and refining the automated search for queries. This could involve developing methods for distilling unimportant terms in the multi-order decomposition or finding new ways to compute query relevance efficiently. The potential for further automating meaningful query identification also holds promise for enhancing the framework’s applicability.

In summary, the Symbolic XAI framework stands out as a robust approach to providing human-readable, abstract, and customizable explanations for AI model predictions across various domains, enhancing both interpretability and transparency.
