Reranking individuals: The effect of fair classification within-groups
Abstract: AI is widely applied across domains, but its deployment raises concerns about fairness. The prevailing discourse on fair classification emphasizes outcome-based metrics that compare sensitive subgroups, without a nuanced consideration of the differential impacts within those subgroups. Bias mitigation techniques not only affect the ranking of pairs of instances across sensitive groups; they often also significantly alter the ranking of instances within these groups. Such changes are hard to explain and raise concerns about the validity of the intervention. Unfortunately, these effects remain under the radar of the accuracy-fairness evaluation framework that is usually applied. We illustrate this effect for several popular bias mitigation methods and show how their output often fails to reflect real-world scenarios.
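The within-group reranking the abstract describes can be made concrete with a minimal sketch. The toy "mitigation" below (a non-uniform score boost for one group) is an assumption for illustration, not a method from the paper; it shows how an intervention aimed at between-group parity can silently flip the relative order of individuals inside a group:

```python
import random

def within_group_inversions(before, after):
    """Count pairs whose relative order flips between two score lists."""
    n = len(before)
    return sum(
        1
        for i in range(n) for j in range(i + 1, n)
        if (before[i] - before[j]) * (after[i] - after[j]) < 0
    )

random.seed(0)
scores = [random.random() for _ in range(20)]   # original classifier scores
group = [i % 2 for i in range(20)]              # sensitive attribute (0 or 1)

# Toy "mitigation" (hypothetical): boost group 1's scores non-uniformly
# to improve between-group parity.
adjusted = [
    s + (0.3 * random.random() if g == 1 else 0.0)
    for s, g in zip(scores, group)
]

for g in (0, 1):
    b = [s for s, gg in zip(scores, group) if gg == g]
    a = [s for s, gg in zip(adjusted, group) if gg == g]
    print(f"group {g}: {within_group_inversions(b, a)} within-group rank flips")
```

Group 0's scores are untouched, so it reports zero flips; group 1 typically shows several, even though a standard accuracy-fairness evaluation would register only the improved between-group statistics.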