The Initial Screening Order Problem
Abstract: We investigate the role of the initial screening order (ISO) in candidate screening. The ISO is the order in which the screener searches the candidate pool when selecting $k$ candidates. Today, the ISO is commonly the product of an information access system, such as an online platform or a database query. Despite its impact on the optimality and fairness of the selected $k$ candidates, especially under a human screener, the ISO has been largely overlooked in the literature. We define two problem formulations describing the search behavior of the screener given an ISO: the best-$k$, in which the screener selects the top $k$ candidates, and the good-$k$, in which the screener selects the first $k$ good-enough candidates. To study the impact of the ISO, we introduce a human-like screener, conceived to be inconsistent over time, and compare it to its algorithmic counterpart. In particular, our analysis shows that under a human-like screener solving the good-$k$ problem, the ISO hinders individual fairness, despite meeting group fairness, and hampers the optimality of the selected $k$ candidates. This is due to position bias, whereby a candidate's evaluation is affected by its position within the ISO. We report extensive simulated experiments exploring the parameters of the best-$k$ and good-$k$ problems for both screeners. Our simulation framework is flexible enough to accommodate multiple candidate-screening tasks, offering an alternative to running real-world screening procedures.
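The two search behaviors can be sketched concretely. The following is an illustrative Python sketch, not the paper's implementation: the candidate scores, the threshold defining "good enough", and $k$ are all hypothetical, and a consistent (algorithmic) screener is assumed so that each candidate's score is fixed.

```python
def best_k(iso_scores, k):
    """Best-k: examine the entire pool and select the k highest-scoring
    candidates, regardless of where they appear in the ISO."""
    order = sorted(range(len(iso_scores)),
                   key=lambda i: iso_scores[i], reverse=True)
    return sorted(order[:k])  # ISO positions of the k best candidates

def good_k(iso_scores, k, threshold):
    """Good-k: walk the ISO in order and select the first k candidates
    whose score meets the threshold, stopping as soon as k are found."""
    selected = []
    for pos, score in enumerate(iso_scores):
        if score >= threshold:
            selected.append(pos)
            if len(selected) == k:
                break
    return selected

scores = [0.6, 0.9, 0.7, 0.95, 0.65]  # hypothetical scores in ISO order
print(best_k(scores, 2))              # -> [1, 3]
print(good_k(scores, 2, 0.65))        # -> [1, 2]
```

The example makes the position dependence visible: the best-$k$ screener selects positions 1 and 3 (the two highest scores), while the good-$k$ screener stops at positions 1 and 2, so the strongest candidate at position 3 is never evaluated.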