Public Perceptions of Fairness Metrics Across Borders

Published 24 Mar 2024 in cs.AI | (2403.16101v3)

Abstract: Which fairness metrics are appropriate in your context? Perceptions of fairness can diverge even when outcomes comply with established fairness metrics. Several questionnaire-based surveys have compared fairness metrics against human perceptions of fairness, but these surveys were limited in scope, covering only a few hundred participants within a single country. In this study, we conduct an international survey to evaluate public perceptions of various fairness metrics in decision-making scenarios. We collected responses from 1,000 participants in each of China, France, Japan, and the United States, 4,000 participants in total, and analyzed their preferences among fairness metrics. Our survey consists of three distinct scenarios, each paired with four fairness metrics. This investigation explores the relationship between personal attributes and the choice of fairness metrics, uncovering a significant influence of national context on these preferences.
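The abstract discusses fairness metrics without naming the four it studies. As an illustrative sketch only (these are standard group fairness metrics from the literature, not necessarily the paper's four), two common metrics can be computed from predictions, labels, and a binary group attribute:

```python
# Sketch of two widely used group fairness metrics. These are standard
# definitions (demographic parity, equal opportunity); the four metrics
# used in the paper are not listed in this abstract.

def demographic_parity_diff(y_pred, group):
    # Absolute difference in positive-prediction rates between groups 0 and 1.
    def rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    # Absolute difference in true-positive rates between groups 0 and 1.
    def tpr(g):
        preds = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy example: six individuals, binary group attribute.
y_pred = [1, 0, 1, 1, 0, 1]
y_true = [1, 0, 1, 0, 1, 1]
group  = [0, 0, 0, 1, 1, 1]
print(demographic_parity_diff(y_pred, group))          # 2/3 vs 2/3 -> 0.0
print(equal_opportunity_diff(y_true, y_pred, group))   # TPR 1.0 vs 0.5 -> 0.5
```

The example shows how a classifier can satisfy one metric (equal positive rates across groups) while violating another (unequal true-positive rates), which is the kind of discordance the survey asks participants to judge.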

