
P2C: Path to Counterfactuals

Published 28 Aug 2025 in cs.AI, cs.LG, and cs.LO | (2508.20371v1)

Abstract: Machine-learning models are increasingly driving decisions in high-stakes settings, such as finance, law, and hiring, thus highlighting the need for transparency. However, the key challenge is to balance transparency -- clarifying 'why' a decision was made -- with recourse: providing actionable steps on 'how' to achieve a favourable outcome from an unfavourable outcome. Counterfactual explanations reveal 'why' an undesired outcome occurred and 'how' to reverse it through targeted feature changes (interventions). Current counterfactual approaches have limitations: 1) they often ignore causal dependencies between features, and 2) they typically assume all interventions can happen simultaneously, an unrealistic assumption in practical scenarios where actions are typically taken in a sequence. As a result, these counterfactuals are often not achievable in the real world. We present P2C (Path-to-Counterfactuals), a model-agnostic framework that produces a plan (ordered sequence of actions) converting an unfavourable outcome to a causally consistent favourable outcome. P2C addresses both limitations by 1) Explicitly modelling causal relationships between features and 2) Ensuring that each intermediate state in the plan is feasible and causally valid. P2C uses the goal-directed Answer Set Programming system s(CASP) to generate the plan accounting for feature changes that happen automatically due to causal dependencies. Furthermore, P2C refines cost (effort) computation by only counting changes actively made by the user, resulting in realistic cost estimates. Finally, P2C highlights how its causal planner outperforms standard planners, which lack causal knowledge and thus can generate illegal actions.

Summary

  • The paper introduces a model-agnostic planning framework, P2C, that generates ordered and causally consistent counterfactual paths.
  • The paper leverages answer set programming and rule-based surrogates to capture causal dependencies and compute minimal-cost direct interventions.
  • The paper demonstrates improved counterfactual proximity and 100% causal compliance across varied datasets, ensuring realistic and executable recourse.

P2C: A Model-Agnostic Framework for Causally Compliant Counterfactual Planning

Introduction

The "P2C: Path to Counterfactuals" paper addresses a critical gap in the generation of counterfactual explanations for ML models, particularly in high-stakes decision-making domains. Existing counterfactual methods often neglect causal dependencies among features and assume that all interventions can be performed simultaneously, which is rarely feasible in real-world scenarios. P2C introduces a model-agnostic, planning-based framework that generates ordered, causally consistent intervention sequences, ensuring that each intermediate state is both feasible and actionable. The approach leverages Answer Set Programming (ASP), specifically the s(CASP) system, to encode and reason about both decision and causal rules, producing realistic and minimal-cost recourse paths.

Problem Formulation and Motivation

Counterfactual explanations serve two purposes: (1) elucidating why a particular decision was made, and (2) providing actionable guidance on how to achieve a desired outcome. However, most prior work either ignores causal structure or returns unordered sets of interventions, leading to recommendations that are not implementable or may violate domain constraints. For example, directly increasing a credit score is not a valid action; instead, one must act on upstream variables such as debt.

P2C formalizes the counterfactual generation problem as a planning task over a causally consistent state space. The framework distinguishes between direct actions (user-initiated feature changes) and causal actions (automatic downstream changes induced by direct interventions), and computes the minimal set of user actions required to reach a favorable outcome.

Methodology

Rule Extraction and Causal Model Construction

P2C operates in three main stages:

  1. Rule Extraction: The black-box classifier is approximated by a rule-based surrogate using FOLD-SE, a scalable and explainable rule-based machine learning algorithm. If a causal model is not provided, FOLD-SE is also used to learn candidate causal rules, which are then validated by domain experts to ensure true causality rather than spurious correlation.
  2. Minimal Causally Compliant Counterfactual (MCCC) Search: Given the decision and causal rules, P2C searches for the minimal-cost counterfactual that is causally consistent. The cost metric is refined to count only direct, user-initiated changes, not those that occur automatically due to causal dependencies.
  3. Path Planning: Using s(CASP), P2C treats the counterfactual search as a planning problem, generating an ordered sequence of interventions from the initial (negative) state to the goal (positive) state. Each intermediate state is checked for causal and decision consistency.

Formal Definitions

  • State Space (S): All possible combinations of feature values.
  • Causally Consistent State Space (S_C): Subset of S in which all causal rules are satisfied.
  • Decision Consistent State Space (S_Q): Subset of S_C in which the decision rules are satisfied.
  • Actions (A): Partitioned into direct and causal actions.
  • Transition Function (δ): Maps a state and an action to a new causally consistent state.
  • Goal Set (G): States in S_C that do not satisfy the decision rules (i.e., favourable outcomes).
  • Solution Path: Sequence of states from the initial state i to a goal state g ∈ G, with each transition respecting causal constraints.
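The definitions above can be sketched as predicates over feature assignments. The following minimal illustration treats rule sets as hypothetical boolean callables; the toy debt/credit rules are invented for the example and are not the paper's actual encoding:

```python
# Sketch of the state-space predicates. Rule sets are hypothetical
# callables from a state (dict of feature values) to bool.

def causally_consistent(state, causal_rules):
    """State is in S_C iff every causal rule holds."""
    return all(rule(state) for rule in causal_rules)

def is_goal(state, causal_rules, decision_rules):
    """Goal set G: causally consistent states where no decision rule
    (which encodes the unfavourable outcome) fires."""
    return (causally_consistent(state, causal_rules)
            and not any(rule(state) for rule in decision_rules))

# Toy example: the decision rule flags high debt as unfavourable;
# the causal rule forbids a good credit score while debt is high.
decision = [lambda s: s["debt"] == "high"]
causal = [lambda s: not (s["debt"] == "high" and s["credit"] == "good")]

print(is_goal({"debt": "high", "credit": "bad"}, causal, decision))   # False
print(is_goal({"debt": "low", "credit": "good"}, causal, decision))   # True
```

A state outside S_C (e.g. high debt with a good credit score) is never a goal, no matter what the decision rules say.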

Algorithmic Framework

P2C's core algorithms are as follows:

  • extract_logic: Approximates the black-box model with a rule-based surrogate.
  • min_cf: Searches for the minimal-cost, causally compliant counterfactual.
  • find_path: Constructs a feasible, causally consistent path from the initial state to the counterfactual.

The planning is implemented in s(CASP), which supports goal-directed, non-grounded execution of ASP programs, allowing for efficient backtracking and justification.

Pseudocode Example

def extract_logic(model, data, rbml_algo):
    """Approximate a black-box model with a rule-based surrogate."""
    if is_rule_based(model):              # already interpretable: use as-is
        return model
    labels = model.predict(data)          # label the data with the black box
    return rbml_algo.train(data, labels)  # e.g. FOLD-SE on the labelled data

def min_cf(initial_state, state_space, weights, causal_rules, decision_rules):
    """Find the minimal-cost, causally compliant counterfactual."""
    best_cost = float('inf')
    best_state = None
    for s in state_space:
        # is_counterfactual checks causal and decision consistency and
        # zeroes out the weights of features changed by causal propagation,
        # so only direct interventions contribute to the cost.
        is_valid, adj_weights = is_counterfactual(s, causal_rules, decision_rules, weights)
        if is_valid:
            cost = compute_weighted_distance(initial_state, s, adj_weights)
            if cost < best_cost:
                best_cost = cost
                best_state = s
    return best_state, best_cost

def find_path(initial_state, state_space, weights, causal_rules, decision_rules, goal_state, actions):
    """Build an ordered, causally consistent path to the counterfactual."""
    visited_states = [(initial_state, [])]  # (state, actions taken so far)
    while get_last(visited_states)[0] != goal_state:
        # Apply a direct action, then propagate its causal effects.
        visited_states = intervene(visited_states, causal_rules, actions)
    return drop_inconsistent(visited_states)  # prune causally invalid steps

Handling Causal Dependencies

P2C encodes both direct and indirect (causal) dependencies. For example, to increase a credit score, the system recognizes that debt must be cleared first, and encodes this as a causal rule. The planner ensures that no illegal actions (e.g., directly setting credit score) are included in the intervention path.
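The credit-score example can be illustrated with a hypothetical propagation step: a direct action changes one feature, and causal rules then fire automatically until a fixpoint is reached. The rule format (condition, feature, forced value) and all names below are illustrative, not the paper's s(CASP) encoding:

```python
# Hypothetical causal rules: (condition, feature, forced_value).
# When the condition holds, the feature is updated automatically.
CAUSAL_RULES = [
    (lambda s: s["debt"] == "none", "credit_score", "good"),
]

def apply_direct_action(state, feature, value, causal_rules):
    """Apply one user action, then propagate causal effects to a fixpoint."""
    new_state = dict(state, **{feature: value})
    changed = True
    while changed:
        changed = False
        for cond, feat, forced in causal_rules:
            if cond(new_state) and new_state[feat] != forced:
                new_state[feat] = forced  # causal action, not user effort
                changed = True
    return new_state

before = {"debt": "high", "credit_score": "bad"}
after = apply_direct_action(before, "debt", "none", CAUSAL_RULES)
print(after)  # {'debt': 'none', 'credit_score': 'good'}
```

Clearing the debt is the only direct action; the credit-score improvement follows from the causal rule, so the planner never needs the illegal "set credit score" action.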

Cost Computation

Unlike prior methods, P2C's cost metric only accounts for direct interventions, not automatic causal effects. This leads to more realistic and actionable recourse recommendations.
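The refined cost metric can be sketched as counting only the features the user changed directly; downstream causal changes contribute nothing. The helper below is a hypothetical illustration using uniform unit weights rather than the paper's weighted distance:

```python
def direct_intervention_cost(initial, final, direct_features):
    """Count only user-initiated changes (unit weights, for illustration);
    features that changed via causal propagation cost nothing."""
    return sum(1 for f in direct_features if initial[f] != final[f])

initial = {"debt": "high", "credit_score": "bad"}
final = {"debt": "none", "credit_score": "good"}

# Both features changed, but only 'debt' was a direct intervention:
print(direct_intervention_cost(initial, final, direct_features={"debt"}))  # 1
```

A naive metric would charge for both changed features; discounting the causal effect halves the reported effort here and better reflects what the user actually has to do.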

Experimental Evaluation

Datasets and Baselines

P2C is evaluated on the Adult, German Credit, and Car Evaluation datasets. Baselines include C3G (an ASP-based causally compliant counterfactual generator), MINT, DiCE, MACE, and standard planners.

Results

  • Counterfactual Proximity: P2C consistently produces counterfactuals with lower or equal L0, L1, and L2 distances compared to C3G, especially in datasets with nontrivial causal dependencies. This is attributed to P2C's refined cost metric and explicit handling of causal effects.
  • Causal Compliance: P2C achieves 100% causal compliance and always generates paths with only legal actions, unlike standard planners or other counterfactual methods.
  • Scalability: The search space can be large, but P2C introduces a placeholder mechanism to consolidate feature values not relevant to decision/causal rules, significantly reducing computational cost without loss of fidelity.
  • Path Quality: Only P2C guarantees that the generated intervention path is both causally compliant and executable in practice. Competing methods either ignore causal structure or produce unordered sets of interventions.
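The placeholder mechanism mentioned above can be sketched as collapsing feature values that no decision or causal rule mentions into a single stand-in value, which shrinks the enumerated state space without affecting which rules can fire. The implementation details below are assumptions for illustration:

```python
from itertools import product

def consolidate(domains, relevant_values):
    """Replace values not mentioned by any rule with one placeholder."""
    out = {}
    for feat, values in domains.items():
        kept = [v for v in values if v in relevant_values.get(feat, set())]
        out[feat] = kept + ["_other"]  # single stand-in for the rest
    return out

domains = {"education": ["hs", "bsc", "msc", "phd"], "debt": ["none", "low", "high"]}
relevant = {"education": {"bsc"}, "debt": {"none", "high"}}  # values rules refer to

reduced = consolidate(domains, relevant)
full = len(list(product(*domains.values())))    # 4 * 3 = 12 states
small = len(list(product(*reduced.values())))   # 2 * 3 = 6 states
print(full, small)
```

Because the merged values are indistinguishable to every rule, the reduction loses no fidelity with respect to causal or decision consistency.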

Implementation Considerations

  • Computational Requirements: The main bottleneck is the combinatorial search over the state space, especially for high-dimensional tabular data. The placeholder mechanism and efficient ASP execution in s(CASP) mitigate this to some extent.
  • Model-Agnosticism: P2C can be applied to any classifier (statistical or rule-based) as long as a rule-based surrogate can be learned.
  • Domain Knowledge: For accurate causal modeling, domain expert validation of learned causal rules is recommended.
  • Deployment: P2C is currently limited to tabular data. Extending to non-tabular domains (e.g., images) would require new methods for extracting interpretable causal and decision rules.

Theoretical and Practical Implications

P2C advances the state of the art in counterfactual explanation by:

  • Providing a formal, planning-based approach to recourse that respects both causal structure and action feasibility.
  • Enabling realistic, stepwise intervention recommendations, which are critical for user trust and regulatory compliance in high-stakes applications.
  • Demonstrating that explicit causal modeling and cost-aware planning yield more efficient and actionable recourse than prior methods.

Theoretically, P2C's use of ASP allows for expressive modeling of cyclic and context-dependent causal relationships, which are not easily handled by DAG-based SCMs.

Future Directions

  • Scalability: Further work is needed to handle very high-dimensional or continuous domains, possibly via abstraction or sampling techniques.
  • Non-Tabular Data: Extending P2C to domains such as images or text will require new approaches for extracting and validating causal and decision rules.
  • Interactive Recourse: Incorporating user preferences and constraints into the planning process could further enhance the practicality of generated recourse paths.

Conclusion

P2C provides a principled, model-agnostic framework for generating causally compliant, minimal-cost counterfactual recourse paths. By leveraging ASP and explicit causal modeling, it overcomes key limitations of prior work, producing actionable and realistic intervention sequences. While computationally intensive, the approach is scalable to moderate-sized tabular datasets and sets a foundation for future research in causally aware, explainable AI recourse.
