Are Structural Causal Models the best framework for modeling human explanatory reasoning?

Determine whether Structural Causal Models (SCMs) are the best mathematical framework for modeling how humans construct and interpret explanations, in light of evidence that human cognition blends causal reasoning with spatial, temporal, and qualitative constraints that may not be naturally captured by SCMs.

Background

The paper argues that explainable AI is fundamentally a causal problem and adopts Structural Causal Models (SCMs) as a principled framework for explanations. However, it acknowledges a competing perspective that human explanatory reasoning often relies on intuitive theories involving objects, forces, and qualitative constraints, which may not be fully captured by SCMs’ variable-centric structural equations.

This concern raises uncertainty about whether SCMs can adequately represent how people construct and interpret explanations. The author notes that recent approaches such as causal abstraction and neuro-symbolic reasoning may bridge the gap, but this remains an unresolved area.
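To make concrete what "variable-centric structural equations" means, here is a minimal sketch of a toy SCM with an intervention (do-operator). The two-variable system and its equations are illustrative assumptions, not the paper's formalism:

```python
# Toy SCM: X := U_x, Y := 2*X + U_y (equations chosen for illustration).
def scm(u_x, u_y, do_x=None):
    """Evaluate the structural equations given exogenous noise terms.

    Passing `do_x` models the do-operator: X's own mechanism is
    severed and replaced by the intervened value, while downstream
    equations (here, Y's) still respond to the new X.
    """
    x = u_x if do_x is None else do_x  # intervention overrides X's mechanism
    y = 2 * x + u_y                    # Y is a deterministic function of X and noise
    return x, y

# Observational setting: X takes its exogenous value.
print(scm(u_x=1, u_y=0))           # (1, 2)
# Interventional setting do(X=3): Y tracks the intervened X.
print(scm(u_x=1, u_y=0, do_x=3))   # (3, 6)
```

The alternative view in the paper is precisely that human explanatory reasoning may not decompose into equation sets of this kind, instead drawing on objects, forces, and spatial or temporal constraints.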

References

Unlike SCMs, which encode causal mechanisms as structured equations over variables, human cognition often blends causal reasoning with spatial, temporal, and qualitative constraints. This makes it unclear whether SCMs are the best mathematical framework for modeling how people construct and interpret explanations.

Position: Explainable AI is Causality in Disguise  (2603.28597 - Karimi, 30 Mar 2026) in Subsection "Alternative Views: Possible Limitations of SCMs for Representing Human Intuition"