Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Published 16 Jun 2025 in stat.ML, cs.AI, and cs.LG | arXiv:2506.13900v1

Abstract: Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically-grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.

Summary

  • The paper proposes a novel framework using Weber and Harsanyi sets to overcome limitations of Shapley values in feature attribution.
  • The paper outlines a three-step blueprint for attribution methods: define the quantity of interest, select a value function, and design the allocation scheme.
  • The paper demonstrates that permutation-based and dividend-based allocations offer nuanced, context-specific explanations in explainable AI.

Introduction

The interpretation of machine learning models has increasingly leveraged cooperative game theory, primarily through the use of Shapley values, which allocate a value to each feature based on its contribution to the model's predictions. Despite their popularity, Shapley-based methods have been criticized for their axiomatic justifications, which may not fully capture the complexities of feature importance and may lead to ambiguous interpretations. This paper proposes a broader exploration of cooperative game theory, introducing the Weber and Harsanyi sets as alternative allocation frameworks that offer greater interpretative flexibility than Shapley values.

Main Contributions

Richness of Cooperative Games for Model Interpretation

The paper provides an overview of the potential of cooperative game theory for constructing feature attributions beyond Shapley values. It introduces the Weber and Harsanyi sets, which generalize Shapley allocations, and emphasizes their practical implications for explainable AI (XAI). These sets allow for more tailored feature attribution methods, demystifying their mathematical underpinnings and challenging the reliance on classical axiomatic principles.

Game Theoretic Feature Attributions Blueprint

The authors present a methodological framework for developing game-theoretic feature attribution methods. This involves selecting value functions and allocation methods that are specifically suited to model interpretation. The Weber and Harsanyi sets facilitate creating custom allocation schemes that can be adapted to specific interpretability goals, supported by recent theoretical advancements.

Cooperative Games and Allocations

In cooperative game theory, players form coalitions and receive payoffs for their contributions. A cooperative game is defined by a set of players and a value function. Allocations distribute the total value among players. The paper introduces the Weber set, based on random order distributions, and the Harsanyi set, grounded in dividend redistribution. Each set offers unique theoretical justifications and efficiency in allocation design.
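
These definitions can be made concrete with a small sketch. The toy game below is invented for illustration (it is not from the paper): the worth of a coalition is the square of its size, and an allocation is efficient when payoffs sum to the grand coalition's worth.

```python
from itertools import chain, combinations

def powerset(players):
    """All coalitions (subsets) of the player set, as tuples."""
    return chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))

# Toy 3-player game: each coalition's worth is the square of its size.
players = (0, 1, 2)
v = {frozenset(S): float(len(S)) ** 2 for S in powerset(players)}

def is_efficient(allocation, v, players):
    """Efficiency: payoffs must sum to the grand coalition's worth."""
    return abs(sum(allocation) - v[frozenset(players)]) < 1e-9
```

Here `is_efficient([3, 3, 3], v, players)` holds, since the grand coalition of three players is worth 9.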

Weber Set: Permutation-Based Allocations

The Weber set assigns each player the expectation of its marginal contributions over random orderings of the players. Under this framework, Shapley values emerge as the special case of a uniform distribution over permutations, weighting every feature ordering equally. Choosing different distributions yields alternative interpretations and custom attribution schemes, and the authors argue for exploiting this freedom in tasks like feature attribution.
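
A minimal sketch of such a permutation-based allocation (using the same illustrative toy game as above, not the paper's code):

```python
from itertools import chain, combinations, permutations

def powerset(players):
    return chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))

# Toy symmetric game: each coalition's worth is the square of its size.
players = (0, 1, 2)
v = {frozenset(S): float(len(S)) ** 2 for S in powerset(players)}

def weber_allocation(v, players, perm_weights):
    """Weber-set allocation: each player's payoff is the expectation of
    its marginal contribution under a distribution over orderings."""
    payoff = {p: 0.0 for p in players}
    for order, w in perm_weights.items():
        seen = set()
        for p in order:
            payoff[p] += w * (v[frozenset(seen | {p})] - v[frozenset(seen)])
            seen.add(p)
    return payoff

# Uniform weights over all orderings recover the Shapley value.
orders = list(permutations(players))
uniform = {o: 1.0 / len(orders) for o in orders}
shapley = weber_allocation(v, players, uniform)  # ~ {0: 3.0, 1: 3.0, 2: 3.0}
```

Putting all the probability mass on a single ordering instead returns the vector of marginal contributions along that order, one extreme point of the Weber set; intermediate distributions interpolate between these extremes.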

Harsanyi Set: Dividend-Based Allocations

Harsanyi allocations distribute payoffs by redistributing coalition dividends, with Shapley values corresponding to an egalitarian split of each coalition's dividend among its members. This approach offers a more nuanced reading of cooperative games by making the collective surplus generated by each coalition explicit. The insights from Harsanyi dividends prompt the development of novel, purpose-specific allocation schemes in XAI, influencing the assessment of feature importance and interaction.
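
The dividend view can be sketched as follows (again on the illustrative toy game, not the paper's code): dividends are the Möbius inverse of the value function, and sharing each coalition's dividend equally among its members reproduces the Shapley value.

```python
from itertools import chain, combinations

def powerset(players):
    return chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))

# Toy symmetric game: each coalition's worth is the square of its size.
players = (0, 1, 2)
v = {frozenset(S): float(len(S)) ** 2 for S in powerset(players)}

def harsanyi_dividends(v, players):
    """Moebius inverse of v: d(S) = sum over T subset of S of
    (-1)^(|S|-|T|) * v(T), i.e. the surplus created by S itself."""
    return {frozenset(S): sum((-1) ** (len(S) - len(T)) * v[frozenset(T)]
                              for T in powerset(S))
            for S in powerset(players)}

def egalitarian_allocation(dividends, players):
    """Split each coalition's dividend equally among its members;
    this particular Harsanyi allocation is the Shapley value."""
    payoff = {p: 0.0 for p in players}
    for S, d in dividends.items():
        for p in S:
            payoff[p] += d / len(S)
    return payoff
```

Replacing the equal split with other nonnegative sharing weights over each coalition yields the other members of the Harsanyi set.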

Blueprint for Leveraging Cooperative Games

The authors outline a three-step approach for using cooperative games in model interpretation:

  1. Define Quantity of Interest: Identify a model prediction or variance as the primary focus, guiding the interpretability study.
  2. Select Value Function: Choose a value function that aligns with the defined quantity of interest and respects feature dependencies for accurate representation.
  3. Design Allocation Scheme: Apply an efficient allocation method, emphasizing the interpretative potential and practical relevance for the specific problem context.
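
The three steps above can be sketched end to end on a hypothetical toy model (the model, baseline, and value function below are illustrative choices, not the paper's):

```python
import itertools

# Step 1 -- quantity of interest: the model's prediction at a point x.
def model(z):
    return 3.0 * z[0] + 2.0 * z[1] - 1.0 * z[2]

x = [1.0, 1.0, 1.0]
background = [[0.0, 0.0, 0.0]]  # baseline sample(s) for absent features

# Step 2 -- value function: marginal expectation, replacing absent
# features with background values (one common choice among several).
def value(S):
    total = 0.0
    for b in background:
        z = [x[i] if i in S else b[i] for i in range(len(x))]
        total += model(z)
    return total / len(background)

# Step 3 -- allocation scheme: exact Shapley values, one member of the
# Weber set; any other efficient allocation could be plugged in here.
def shapley(n):
    phi = [0.0] * n
    orders = list(itertools.permutations(range(n)))
    for order in orders:
        seen = set()
        for i in order:
            phi[i] += (value(seen | {i}) - value(seen)) / len(orders)
            seen.add(i)
    return phi

phi = shapley(3)  # for this linear model: [3.0, 2.0, -1.0]
```

For this linear model with a zero baseline, the attributions equal the coefficients, and by efficiency they sum to the prediction at `x`.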

Value Functions and Model Representations

The choice of value function significantly impacts the interpretability and reliability of feature attributions. Recent theoretical work examines orthogonal and oblique projections of the model onto restricted feature subsets as a way to guarantee well-founded value functions. These developments reinforce the necessity of theoretically sound value functions for capturing model behavior accurately, arguing for more deliberate, principle-driven choices.

Purposeful Allocations

The authors encourage exploration beyond Shapley values toward allocations specifically designed for interpretability tasks. The Weber and Harsanyi frameworks offer flexibility to construct meaningful attributions suited to the unique demands of XAI. Concepts such as proportional marginal effects (PME) demonstrate the advantage of adapting allocation methodologies to specific computational requirements and correlations within the data.

Conclusion

This paper recommends using cooperative game theory beyond Shapley values, leveraging the Weber and Harsanyi sets for more adaptable and theoretically grounded model interpretations. It stresses the importance of efficient, contextually relevant feature attributions in advancing the XAI field. Overall, these strategies lay the groundwork for developing robust and meaningful attributions compatible with evolving insights and methodologies.
