On the Inductive Biases of Demographic Parity-based Fair Learning Algorithms

Published 28 Feb 2024 in cs.LG, cs.AI, cs.IT, and math.IT (arXiv:2402.18129v2)

Abstract: Fair supervised learning algorithms that assign labels with little dependence on a sensitive attribute have attracted great attention in the machine learning community. While the demographic parity (DP) notion is frequently used to measure a model's fairness when training fair classifiers, several studies in the literature suggest that enforcing DP in fair learning algorithms can have unintended side effects. In this work, we analytically study the effect of standard DP-based regularization methods on the conditional distribution of the predicted label given the sensitive attribute. Our analysis shows that an imbalanced training dataset with a non-uniform distribution of the sensitive attribute can lead to a classification rule biased toward the sensitive-attribute outcome that holds the majority of the training data. To control such inductive biases in DP-based fair learning, we propose a sensitive attribute-based distributionally robust optimization (SA-DRO) method that improves robustness to the marginal distribution of the sensitive attribute. Finally, we present several numerical results on the application of DP-based learning methods to standard centralized and distributed learning problems. The empirical findings support our theoretical results on the inductive biases of DP-based fair learning algorithms and the debiasing effect of the proposed SA-DRO method.
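
To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the paper's implementation: dp_regularized_loss adds an illustrative demographic-parity penalty (the gap between group-conditional mean predictions) to the usual cross-entropy, while sa_dro_style_loss replaces the empirical mixture over sensitive-attribute groups with the worst group-conditional loss, a crude stand-in for distributional robustness to the sensitive-attribute marginal. The function names, the penalty form, and the weight lam are assumptions made here for illustration only; the paper's exact SA-DRO formulation may differ.

    # Hypothetical sketch: DP-regularized loss vs. an SA-DRO-style variant.
    # Assumptions not taken from the paper: a binary sensitive attribute a in {0, 1},
    # a soft binary classifier, the DP penalty measured as the absolute gap between
    # group-conditional mean predictions, and the DRO step approximated by taking
    # the worst group-conditional loss instead of the empirical mixture over groups.
    import torch
    import torch.nn.functional as F

    def dp_regularized_loss(logits, y, a, lam=1.0):
        """Average cross-entropy plus lam times an (illustrative) DP gap."""
        p = torch.sigmoid(logits)
        ce = F.binary_cross_entropy_with_logits(logits, y.float())
        dp_gap = (p[a == 0].mean() - p[a == 1].mean()).abs()
        return ce + lam * dp_gap

    def sa_dro_style_loss(logits, y, a, lam=1.0):
        """Same DP penalty, but the accuracy term uses the worst group-conditional
        loss, so the majority sensitive-attribute group cannot dominate the objective."""
        p = torch.sigmoid(logits)
        ce = F.binary_cross_entropy_with_logits(logits, y.float(), reduction="none")
        group_losses = torch.stack([ce[a == g].mean() for g in (0, 1)])
        dp_gap = (p[a == 0].mean() - p[a == 1].mean()).abs()
        return group_losses.max() + lam * dp_gap

    if __name__ == "__main__":
        torch.manual_seed(0)
        logits = torch.randn(64)        # classifier scores for a mini-batch
        y = torch.randint(0, 2, (64,))  # binary labels
        a = torch.randint(0, 2, (64,))  # binary sensitive attribute
        print(dp_regularized_loss(logits, y, a).item())
        print(sa_dro_style_loss(logits, y, a).item())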

