
Explainable AI for Trees: From Local Explanations to Global Understanding

Published 11 May 2019 in cs.LG, cs.AI, and stat.ML | (1905.04610v1)

Abstract: Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are the most popular non-linear predictive models used in practice today, yet comparatively little attention has been paid to explaining their predictions. Here we significantly improve the interpretability of tree-based models through three main contributions: 1) The first polynomial time algorithm to compute optimal explanations based on game theory. 2) A new type of explanation that directly measures local feature interaction effects. 3) A new set of tools for understanding global model structure based on combining many local explanations of each prediction. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to i) identify high magnitude but low frequency non-linear mortality risk factors in the general US population, ii) highlight distinct population sub-groups with shared risk characteristics, iii) identify non-linear interaction effects among risk factors for chronic kidney disease, and iv) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains.

Citations (265)

Summary

  • The paper develops TreeExplainer to compute Shapley values for tree ensembles efficiently, ensuring local accuracy and consistency.
  • The paper introduces SHAP interaction values to capture and distinguish feature interactions in individual predictions.
  • The paper aggregates local explanations into global insights using SHAP plots, revealing critical patterns and anomalies in model behavior.


The paper "Explainable AI for Trees: From Local Explanations to Global Understanding" presents a significant advance in explainable AI (XAI) for tree-based models such as random forests, decision trees, and gradient boosted trees. These models are widely deployed across industries because they handle non-linear relationships well, yet their interpretability has lagged, particularly in explaining individual predictions (local explanations). The authors address that gap through three contributions grounded in game theory.

The first contribution is TreeExplainer, which computes Shapley values for tree ensembles in polynomial time. Shapley values are the feature attributions from cooperative game theory that uniquely satisfy desirable properties such as local accuracy and consistency, but computing them exactly is NP-hard for arbitrary models. By exploiting the structure of trees, TreeExplainer makes exact computation tractable for tree-based models, avoiding sampling approximations and yielding consistent, robust explanations.
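TreeExplainer's speedup comes from a tree-specific recursion not reproduced here, but the quantity it computes can be illustrated with a brute-force sketch over a toy model and background dataset (both made up for illustration; this enumeration is exponential in the number of features, which is exactly what the paper's algorithm avoids):

```python
from itertools import combinations
from math import factorial

def model(x):
    # toy stand-in for a tree ensemble's prediction function
    return (10.0 if x[0] > 5 and x[1] > 3 else 0.0) + (2.0 if x[2] > 1 else 0.0)

# small background dataset used to marginalize out "missing" features
background = [(0, 0, 0), (8, 4, 2), (6, 0, 2), (0, 4, 0)]

def coalition_value(S, x):
    # v(S): expected model output with the features in S fixed to x's values
    total = 0.0
    for b in background:
        z = [x[i] if i in S else b[i] for i in range(len(x))]
        total += model(z)
    return total / len(background)

def shapley_values(x):
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (coalition_value(set(S) | {i}, x)
                                 - coalition_value(set(S), x))
        phis.append(phi)
    return phis

x = (8, 4, 2)
phis = shapley_values(x)
# local accuracy: the attributions sum to f(x) minus the expected baseline output
print(sum(phis), model(x) - coalition_value(set(), x))  # both are 8.5 here
```

The printed equality is the local accuracy property the paper guarantees: per-feature attributions always add up to the gap between the model's output and its expected baseline.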

The second contribution extends local explanations to capture feature interactions explicitly through SHAP interaction values, computed with a generalization of Shapley values (the Shapley interaction index). These values let practitioners separate a feature's main effect from its interaction effects with other features in an individual prediction, providing deeper insight into the model's behavior. For instance, the interaction between age and blood pressure can significantly affect a mortality risk prediction, and SHAP interaction values isolate that joint contribution.
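For a two-feature toy model (a hypothetical function, not one from the paper), the off-diagonal SHAP interaction value reduces to half the discrete mixed difference of the coalition value function, which makes the idea concrete:

```python
# toy model with an explicit multiplicative interaction between x0 and x1
def model(x0, x1):
    return 2 * x0 + 3 * x1 + 5 * x0 * x1  # the 5*x0*x1 term is pure interaction

baseline = (0, 0)   # single reference input standing in for a background set
x = (1, 1)          # instance being explained

def coalition_value(mask):
    # features "present" in the coalition take x's value, absent ones the baseline's
    z = [x[i] if mask[i] else baseline[i] for i in range(2)]
    return model(*z)

# SHAP interaction value between features 0 and 1: half the discrete mixed
# difference (the other half is attributed symmetrically to the (1, 0) entry)
delta = (coalition_value((1, 1)) - coalition_value((1, 0))
         - coalition_value((0, 1)) + coalition_value((0, 0)))
phi_01 = 0.5 * delta
print(phi_01)  # 2.5: half of the 5-unit interaction effect
```

If the model were purely additive, `delta` would be zero, so this quantity is nonzero exactly when the two features genuinely interact.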

Lastly, the paper presents a suite of tools that aggregate many local explanations into global insights about a tree-based model's behavior, including SHAP summary plots and SHAP dependence and interaction plots. These tools show how each feature influences predictions across an entire dataset, revealing patterns and anomalies invisible to traditional global interpretability methods. For example, SHAP summary plots can surface infrequent but critical health indicators whose rare, high-magnitude effects conventional global importance measures would average away.
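The aggregation step behind a summary plot's importance ordering can be sketched as a mean-absolute-value reduction over per-sample attributions (the attribution matrix below is made up for illustration):

```python
# hypothetical per-sample SHAP attributions: rows = samples, columns = features
shap_values = [
    [ 0.1, -2.0, 0.0],
    [-0.2,  1.5, 0.1],
    [ 0.0, -1.8, 4.0],   # feature "c": rare but high-magnitude effect
    [ 0.1,  1.6, 0.0],
]
features = ["a", "b", "c"]

n = len(shap_values)
# global importance = mean |attribution| per feature, so an infrequent but
# large local effect (like feature "c" in row 3) still registers
global_importance = {
    name: sum(abs(row[j]) for row in shap_values) / n
    for j, name in enumerate(features)
}
ranked = sorted(features, key=global_importance.get, reverse=True)
print(ranked)  # ['b', 'c', 'a']
```

Because the reduction is over absolute local attributions rather than, say, split counts, feature "c" ranks second despite mattering in only one sample, which is the kind of rare-but-critical signal the paper argues conventional global measures miss.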

The implications of this work are profound for domains requiring transparent decision-making processes, such as healthcare and finance. By providing both local and global explanations of prediction models, stakeholders can understand and trust the predictions, facilitating better decision-making and identifying potential biases or inaccuracies in model predictions. Moreover, the insights into feature interactions enable a more nuanced understanding of the underlying data relationships, which is crucial for areas like personalized medicine.

The research promotes a broader adoption of tree-based models in high-stakes settings by significantly enhancing their interpretability. It also opens new avenues for exploring feature interactions and their implications in different applications. Future work could focus on extending these methods to other model classes, improving computational efficiency further, or exploring the integration of these interpretability tools into real-time systems. This research marks a critical step towards more transparent, understandable, and trustworthy AI systems.
