
Causal ABA for Causal Discovery

Updated 22 February 2026
  • Causal ABA is a symbolic framework that integrates conditional independence tests and expert semantic priors to reconstruct directed acyclic graphs.
  • It encodes causal constraints into an assumption-based argumentation structure, using stable and grounded semantics to resolve conflicts and ensure robust inference.
  • Empirical evaluations show that the method produces accurate edge orientations and superior error metrics compared to traditional causal discovery techniques.

Causal Assumption-based Argumentation (Causal ABA) is a symbolic framework for integrating statistical and expert-derived constraints to reconstruct causal graphs—typically represented as directed acyclic graphs (DAGs)—from observational data. It leverages the formal apparatus of Assumption-Based Argumentation (ABA), a mature knowledge-representation formalism, to encode and resolve conflicts among causal constraints, thereby yielding causal models consistent with the input evidence and domain knowledge. Causal ABA finds application in causal discovery, ensuring rigorous correspondence between the set of accepted constraints and the d-separation relations of the inferred causal structure, while supporting principled incorporation of semantic priors, such as those elicited from LLMs (Li et al., 18 Feb 2026, Russo et al., 2024).

1. Formal Definition and Framework Construction

Let $V = \{X_1, \ldots, X_n\}$ denote a set of observed variables. A causal DAG $G = (V, E)$ comprises directed edges $E \subseteq V \times V$ encoding direct causal relations. Causal ABA encodes causal discovery as a two-level process:

  • Constraint Set Construction: Input constraints $C$ are collected as the union $C = C^{CI} \cup C^{LLM}$, where $C^{CI}$ comprises (possibly noisy) conditional independence (CI) statements derived from data, and $C^{LLM}$ comprises semantic causal priors generated from LLM prompts or domain expertise. CI constraints are of the form $c_{(X,Y;Z)}^{\perp}$ ($X \perp Y \mid Z$) or $c_{(X,Y;Z)}^{\neg\perp}$ ($X \not\perp Y \mid Z$), adjudicated by statistical tests (e.g., Wilks's $G^2$) with a significance threshold $\tau$ set by $\alpha$ (Li et al., 18 Feb 2026). Semantic priors are derived by repeated LLM queries, extracting high-precision judgments CAUSES$(X, Y)$ (required arrow) and $\neg$CAUSES$(X, Y)$ (forbidden arrow), with consensus obtained via intersection over prompt replicates (Li et al., 18 Feb 2026).
  • ABA Argumentation Structure: Each constraint $c_i \in C$ is mapped to an assumption $a_i$ in an ABA framework $AF = (\text{Args}, \text{Attacks})$, where

\[
\text{Args} = \{ a_i : c_i \in C \}, \qquad \text{Attacks} = \{ (a_i, a_j) \mid a_i \vdash \neg c_j \text{ by the inference rules } R \}.
\]

Here, conflict (attack) structure is determined by acyclicity and d-separation inference rules, often implemented in Answer Set Programming (ASP) (Russo et al., 2024).
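Acyclicity is one of the structural conditions the inference rules enforce. As a minimal illustration (plain Python rather than the papers' ASP encoding, with illustrative names), a candidate edge set can be checked with Kahn's algorithm:

```python
from collections import deque

def is_acyclic(edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    # repeatedly remove nodes with no remaining incoming edges
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)  # all nodes removed <=> no cycle

print(is_acyclic({("X", "Y"), ("Y", "Z")}))              # → True
print(is_acyclic({("X", "Y"), ("Y", "Z"), ("Z", "X")}))  # → False
```

In the ASP implementation this check is not run post hoc; instead, rules reject any answer set whose accepted arrows form a cycle.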

Accepted assumptions are computed according to stable semantics (or optionally grounded semantics), guaranteeing that the set of accepted constraints is both conflict-free and maximally informative with respect to the input DAG-theoretic rules (Li et al., 18 Feb 2026, Russo et al., 2024).
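The acceptance computation can be illustrated on an abstract attack graph. The sketch below uses grounded semantics (the simpler fixed-point variant mentioned above) rather than stable semantics, and all names are illustrative, not the papers' API:

```python
# Grounded extension of an abstract attack graph: iteratively accept
# arguments all of whose attackers are already defeated.
# `attacks` maps each argument to the set of arguments it attacks.

def grounded_extension(args, attacks):
    attackers = {a: set() for a in args}
    for a, targets in attacks.items():
        for t in targets:
            attackers[t].add(a)
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            if attackers[a] <= defeated:  # every attacker is defeated
                accepted.add(a)
                changed = True
        newly = {t for a in accepted for t in attacks.get(a, ())} - defeated
        if newly:
            defeated |= newly
            changed = True
    return accepted

# Toy conflict: a1 (an independence fact) and a2 (a conflicting
# dependence fact) attack each other; a3 (e.g., a semantic prior)
# also attacks a2, breaking the tie.
args = {"a1", "a2", "a3"}
attacks = {"a1": {"a2"}, "a2": {"a1"}, "a3": {"a2"}}
print(sorted(grounded_extension(args, attacks)))  # → ['a1', 'a3']
```

With only the mutual attack (no a3), the grounded extension is empty, which is exactly why the papers resort to stable semantics plus weighting to force a decision among conflicting CI facts.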

2. Constraint Encoding and Logical Rules

Causal ABA systematically encodes constraints and their logical interactions, capturing both structural and statistical information:

  • Edge and Non-Edge Assumptions: Each possible directed edge $arr_{xy}$ and non-edge $noe_{xy}$ between $x, y \in V$ is represented as an assumption. Contraries are defined so that, for each unordered pair, only one of $\{arr_{xy}, arr_{yx}, noe_{xy}\}$ may be accepted, upholding the antisymmetry of the skeleton (Russo et al., 2024).
  • Conditional Independence/D-Separation Facts: Each CI test from data yields either an assumed independence indep$(x, y \mid Z)$ or dependence dep$(x, y \mid Z)$. These are connected by path-based inference rules expressing the equivalence between d-separation in the DAG and conditional independence in the joint distribution, under the Markov and faithfulness assumptions (Russo et al., 2024, Li et al., 18 Feb 2026).
  • Attack Relations: Attack relations derive from the impossibility of satisfying conflicting independence/dependence and edge configurations. For instance, two assumptions encoding $X \perp Y$ and $X \not\perp Y \mid Z$ may attack each other when $Z$ conditions a collider (e.g., in the presence of v-structures). The inference rules $R$ enforce acyclicity and d-separation, with cyclicity violations or impossible path configurations generating attacks (Li et al., 18 Feb 2026, Russo et al., 2024).
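As a rough illustration of the edge/non-edge encoding, the mutual-exclusion attacks among $arr_{xy}$, $arr_{yx}$, and $noe_{xy}$ can be generated directly (a hypothetical plain-Python sketch; the papers implement this via ABA contraries in ASP):

```python
from itertools import combinations, permutations

def edge_assumptions(variables):
    """Assumptions arr_xy, arr_yx, noe_xy per unordered pair, plus the
    symmetric attacks enforcing that at most one may be accepted."""
    assumptions, attacks = set(), set()
    for x, y in combinations(sorted(variables), 2):
        group = [f"arr_{x}{y}", f"arr_{y}{x}", f"noe_{x}{y}"]
        assumptions |= set(group)
        # each of the three options attacks the other two
        for a, b in permutations(group, 2):
            attacks.add((a, b))
    return assumptions, attacks

assumptions, attacks = edge_assumptions(["X", "Y"])
print(sorted(assumptions))  # → ['arr_XY', 'arr_YX', 'noe_XY']
print(len(attacks))         # → 6 directed attacks among the three options
```

Acceptance semantics then select exactly one member of each triple per pair, which is how the skeleton and orientation emerge jointly from the extension.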

3. Algorithmic Pipeline and Implementation

The Causal ABA inference pipeline operates as follows (Li et al., 18 Feb 2026, Russo et al., 2024):

  1. Semantic Priors Extraction: For each ordered pair $(X, Y)$, LLM queries (typically 5 replicates) are issued using metadata on variable names/descriptions, collecting required and forbidden edges. Consensus sets $P_{req}$, $P_{forbid}$ are obtained by intersection.
  2. Conditional Independence Testing: For all unordered pairs $(X, Y)$ and conditioning sets $Z$, the test statistic $I(X; Y \mid Z)$ (such as $G^2$) is computed on data. If $I < \tau$, record $c^{\perp}_{X,Y;Z}$; otherwise $c^{\neg\perp}_{X,Y;Z}$, where $\tau$ is selected by significance level.
  3. Framework Assembly and Argumentation: The skeleton is initialized as the complete undirected graph and reduced by removing edges with strong marginal independence ($c^{\perp}_{X,Y;\emptyset}$). The full set of assumptions is encoded, and attacks are constructed via ASP, implementing acyclicity and d-separation. The stable extension $E^*$ is computed.
  4. DAG Extraction: The accepted assumptions define the oriented graph: required arrows are oriented, forbidden arrows suppressed, and the remaining structure is oriented using accepted d-separation constraints.

This methodology is implemented using answer set programming (clingo), with specific rule sets governing edge selection, path searching, and d-separation witness logic (Li et al., 18 Feb 2026, Russo et al., 2024).
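Step 2 of the pipeline can be sketched for the marginal case. The function below computes Wilks's $G^2$ from a contingency table; the chi-squared critical value is hard-coded for one degree of freedom at $\alpha = 0.05$ rather than derived from a CDF, and all names are illustrative:

```python
import math

def g_squared(table):
    """Wilks's G^2 statistic for a 2D contingency table of counts:
    G^2 = 2 * sum obs * ln(obs / exp), with exp from the marginals."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    g2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                exp = row_tot[i] * col_tot[j] / n
                g2 += 2.0 * obs * math.log(obs / exp)
    return g2

# Illustrative 2x2 table for binary X, Y; df = 1, and the chi-squared
# critical value at alpha = 0.05 is about 3.841.
table = [[30, 10], [10, 30]]
stat = g_squared(table)
print(round(stat, 2), "dependent" if stat > 3.841 else "independent")  # → 20.93 dependent
```

For conditional tests, the same statistic is summed over the strata of $Z$, with degrees of freedom adjusted accordingly.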

4. Theoretical Properties and Guarantees

Soundness and completeness of Causal ABA follow directly under standard causal assumptions:

  • Soundness: Every edge in the output DAG $G^*$ is supported by a chain of accepted assumptions consistent with the input CI evidence and semantic priors, as dictated by the stable extension (Li et al., 18 Feb 2026, Russo et al., 2024).
  • Completeness: Any edge orientation that is identifiable from the union of $C^{CI}$ and $C^{LLM}$ appears in $G^*$, up to Markov equivalence. Any DAG disagreeing with accepted (in)dependence or semantic constraints is "attacked" and excluded under stable semantics (Li et al., 18 Feb 2026, Russo et al., 2024).
  • Faithfulness and Markov Condition: Only DAGs whose d-separations agree with the accepted CI and semantic constraints are allowed. Acceptance semantics ensure d-separation correspondence between the chosen extension and the output graph (Russo et al., 2024).

5. Empirical Evaluation and Protocols

Causal ABA has been evaluated on benchmark and synthetic datasets using standardized metrics and protocols:

  • Datasets: Standard Bayesian networks (ASIA, SACHS, etc.) from the bnlearn repository and "CauseNet" synthetic graphs derived from subgraph isomorphisms in the CauseNet knowledge base, with structure selected for semantic compactness and structural-semantic alignment (Li et al., 18 Feb 2026).
  • Bias Mitigation: Variable names/descriptions are randomized with LLMs prompted not to reveal causal structure, reducing the potential for memorization bias—ensuring evaluation reflects generalization rather than LLM rote recall (Li et al., 18 Feb 2026).
  • Metrics: Structural Hamming Distance (SHD), Structural Intervention Distance (SID), F1-score (adjacency/orientation), and edge precision & recall. Statistical significance is determined using two-sample unequal-variance t-tests with Benjamini–Hochberg correction ($\alpha = 0.05$) (Li et al., 18 Feb 2026).
  • Results: Causal ABA achieves state-of-the-art performance, surpassing constraint-based majority voting and score-based neural approaches in worst-case normalized SID (NSID) and SHD, especially in scenarios with conflicting or ambiguous constraints (Li et al., 18 Feb 2026, Russo et al., 2024). ABA-based methods yield lower worst-case NSID than FGS, NOTEARS, or majority-PC baselines, and produce more accurate skeletons and edge orientations (Russo et al., 2024).
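The structural metrics are simple to compute from edge sets; below is a minimal SHD sketch (one count per missing, extra, or mis-oriented adjacency, under the usual convention; function names are illustrative):

```python
def shd(true_edges, pred_edges):
    """Structural Hamming Distance between two DAGs given as sets of
    directed (parent, child) pairs: one count per adjacency that is
    missing or extra in the skeleton, plus one per reversed edge."""
    true_edges, pred_edges = set(true_edges), set(pred_edges)
    skel = lambda edges: {frozenset(e) for e in edges}
    skeleton_diff = skel(true_edges) ^ skel(pred_edges)
    reversed_edges = {e for e in pred_edges if (e[1], e[0]) in true_edges}
    return len(skeleton_diff) + len(reversed_edges)

# One reversed edge (R->O predicted as O->R) costs exactly 1:
truth = {("E", "O"), ("R", "O"), ("O", "I")}
pred = {("E", "O"), ("O", "R"), ("O", "I")}
print(shd(truth, pred))  # → 1
```

SID, by contrast, counts pairs whose interventional distribution would be mis-estimated and requires reasoning over adjustment sets, so it is not reducible to a skeleton comparison like this.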

6. Limitations and Future Directions

Causal ABA as implemented exhibits several limitations (Li et al., 18 Feb 2026, Russo et al., 2024):

  • Computational Complexity: Enumeration of all argumentation extensions is exponential in the number of variables, though tractable up to $n \approx 15$ with preliminary skeleton reduction and weighted relaxation (Li et al., 18 Feb 2026).
  • Scalability: Exponential blow-up in the number of d-separation paths constrains practical application to moderate-sized graphs ($d \leq 10$) (Russo et al., 2024).
  • Causal Assumptions: The framework requires acyclicity, causal sufficiency (no hidden confounders), and faithfulness. Handling latent confounding or cycles necessitates generalization to Partial Ancestral Graphs or alternative semantics (Russo et al., 2024).
  • Statistical Error Handling: Treatment of conflicting or noisy CI evidence is heuristic (weight-based fact dropping or ranking); fully Bayesian extensions are not yet realized (Russo et al., 2024).
  • Implementation: Run-time is generally higher than for purely statistical methods, due to logic-based reasoning, though ASP-based implementations show tractability for small to mid-sized networks (Russo et al., 2024).

Planned extensions include incremental ASP solving, hybrid score-based constraint integration (e.g., BIC, Monte Carlo posteriors), and relaxation to partial/preferred ABA semantics for enhanced conflict tolerance. A plausible implication is that future Causal ABA frameworks will handle larger networks and more complex causal structures, including latent variables and cycles.

7. Worked Example

Consider $V = \{E, R, O, I\}$ (education, race, occupation, income) (Li et al., 18 Feb 2026):

  • LLM semantic priors (consensus): Required: $E \rightarrow O$, $O \rightarrow I$. Forbidden: $I \rightarrow R$, $R \rightarrow E$.
  • CI test constraints: $E \perp R$, $E \not\perp R \mid O$, $E \perp I \mid O$.
  • Argumentation Framework: Constructed arguments/attacks based on d-separation rules (e.g., mutual attack between $E \perp R$ and $E \not\perp R \mid O$; $E \perp I \mid O$ conflicts with required arrows).
  • Stable Extension: Eliminate assumptions with least credible evidence, producing a conflict-free, maximal accepted set.
  • Final DAG: Enforce required/forbidden arrows, orient ambiguous connections by remaining CI constraints/d-separation. Ground-truth structure is recovered, demonstrating transparent resolution of statistical and semantic evidence.

This approach illustrates the transparent, defeasible, and consistent integration of CI-based data evidence and semantic or expert knowledge for principled causal discovery, with one-to-one correspondence between accepted input constraints and the d-separations of the learned causal graph (Li et al., 18 Feb 2026).
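Under the stated assumptions, the recovered structure can be checked mechanically. The sketch below uses the standard moral-graph d-separation test (not the papers' ASP machinery) to confirm that the collider DAG $E \rightarrow O \leftarrow R$, $O \rightarrow I$, one structure consistent with the listed constraints, satisfies every required, forbidden, and CI constraint in the example:

```python
from collections import deque

def d_separated(parents, x, y, z):
    """Moral-graph test: x ⊥ y | z holds in the DAG iff x and y are
    disconnected in the moralized ancestral graph after deleting z."""
    # 1. ancestral set of {x, y} ∪ z
    relevant, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(parents.get(n, ()))
    # 2. moralize: connect each node to its parents, marry co-parents
    adj = {n: set() for n in relevant}
    for n in relevant:
        ps = [p for p in parents.get(n, ()) if p in relevant]
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. delete z and test undirected connectivity x -- y
    z, seen, queue = set(z), {x}, deque([x])
    while queue:
        n = queue.popleft()
        if n == y:
            return False  # connected => d-connected
        for m in adj[n]:
            if m not in z and m not in seen:
                seen.add(m); queue.append(m)
    return True

# Candidate DAG from the example: E -> O <- R, O -> I
parents = {"E": set(), "R": set(), "O": {"E", "R"}, "I": {"O"}}
edges = {(p, c) for c, ps in parents.items() for p in ps}

assert ("E", "O") in edges and ("O", "I") in edges          # required arrows
assert ("I", "R") not in edges and ("R", "E") not in edges  # forbidden arrows
assert d_separated(parents, "E", "R", [])         # E ⊥ R
assert not d_separated(parents, "E", "R", ["O"])  # E ⊥̸ R | O (collider opened)
assert d_separated(parents, "E", "I", ["O"])      # E ⊥ I | O
print("all constraints satisfied")
```

Conditioning on $O$ opens the collider path $E \rightarrow O \leftarrow R$, which is exactly why the marginal independence and the conditional dependence can coexist without conflict in the accepted extension.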
