Less Discriminatory Alternative and Interpretable XGBoost Framework for Binary Classification

Published 24 Oct 2024 in stat.ML and cs.LG (arXiv:2410.19067v1)

Abstract: Fair lending practices and model interpretability are crucial concerns in the financial industry, especially given the increasing use of complex machine learning models. In response to the Consumer Financial Protection Bureau's (CFPB) requirement to protect consumers against unlawful discrimination, we introduce LDA-XGB1, a novel less discriminatory alternative (LDA) machine learning model for fair and interpretable binary classification. LDA-XGB1 is developed through biobjective optimization that balances accuracy and fairness, with both objectives formulated using binning and information value. It leverages the predictive power and computational efficiency of XGBoost while ensuring inherent model interpretability, including the enforcement of monotonic constraints. We evaluate LDA-XGB1 on two datasets: SimuCredit, a simulated credit approval dataset, and COMPAS, a real-world recidivism prediction dataset. Our results demonstrate that LDA-XGB1 achieves an effective balance between predictive accuracy, fairness, and interpretability, often outperforming traditional fair lending models. This approach equips financial institutions with a powerful tool to meet regulatory requirements for fair lending while maintaining the advantages of advanced machine learning techniques.
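Both objectives in the bi-objective optimization are formulated using binning and information value (IV). As a rough illustration of the standard IV metric from credit scoring (a minimal sketch of the conventional definition, not the authors' exact formulation; the function name and bin counts below are hypothetical):

```python
import math

def information_value(bins):
    """Information Value of a binned feature against a binary target.

    bins: list of (n_good, n_bad) count pairs, one per bin.
    IV = sum over bins of (pct_good - pct_bad) * WoE, where
    WoE = ln(pct_good / pct_bad) is the bin's Weight of Evidence.
    """
    total_good = sum(g for g, _ in bins)
    total_bad = sum(b for _, b in bins)
    iv = 0.0
    for g, b in bins:
        pg = g / total_good  # share of goods falling in this bin
        pb = b / total_bad   # share of bads falling in this bin
        woe = math.log(pg / pb)
        iv += (pg - pb) * woe
    return iv

# Hypothetical feature binned into three groups of (good, bad) counts
bins = [(100, 10), (80, 40), (20, 50)]
print(round(information_value(bins), 4))  # → 1.2876
```

A higher IV indicates a bin structure that separates the classes more strongly; the same quantity can be computed against an outcome label (predictive power) or a protected attribute (discrimination), which is what makes it usable on both sides of an accuracy–fairness trade-off.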

References (8)
  1. Consumer Financial Protection Bureau (2022). Consumer Financial Protection Circular 2022-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms. May 26, 2022.
  2. Interpretable machine learning based on functional ANOVA framework: algorithms and comparisons. arXiv preprint arXiv:2305.15670.
  3. Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
  4. Navas-Palencia, G. (2020). Optimal binning: mathematical programming formulation. arXiv preprint arXiv:2001.08025.
  5. Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144).
  6. Siddiqi, N. (2006). Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring. John Wiley & Sons.
  7. Yang, Z., Zhang, A., and Sudjianto, A. (2021). Enhancing explainability of neural networks through architecture constraints. IEEE Transactions on Neural Networks and Learning Systems, 32(6):2610–2621.
  8. Yang, Z., Zhang, A., and Sudjianto, A. (2021). GAMI-Net: An explainable neural network based on generalized additive models with structured interactions. Pattern Recognition, 120:108192.
