Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints

Published 13 Feb 2024 in cs.CY (arXiv:2402.08171v4)

Abstract: What counts as legitimate AI ethics labor, and consequently, what are the epistemic terms on which AI ethics claims are rendered legitimate? Based on 75 interviews with technologists including researchers, developers, open source contributors, and activists, this paper explores the various epistemic bases from which AI ethics is discussed and practiced. In the context of outside attacks on AI ethics as an impediment to "progress," I show how some AI ethics practices have reached toward authority from automation and quantification, and achieved some legitimacy as a result, while those based on richly embodied and situated lived experience have not. This paper draws together the work of feminist Anthropology and Science and Technology Studies scholars Diana Forsythe and Lucy Suchman with the works of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. By entrenching the epistemic power of quantification, dominant AI ethics practices -- employing Model Cards and similar interventions -- risk legitimizing AI ethics as a project in equal and opposite measure to which they marginalize embodied lived experience as a legitimate part of the same project. In response, I propose humble technical practices: quantified or technical practices which specifically seek to make their epistemic limits clear in order to flatten hierarchies of epistemic power.

Summary

  • The paper offers an empirical mapping of AI ethics labor hierarchies using 75 in-depth interviews, highlighting the marginalization of situated expertise.
  • The study reveals that quantitative interventions, like Model Cards and Datasheets, legitimize AI ethics only when formalized within engineering frameworks.
  • It advocates for humble technical practices that redistribute epistemic power by integrating lived experience into AI system governance.

Epistemic Power and Legitimacy in AI Ethics Labor

Introduction

"Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints" offers an incisive empirical and theoretical analysis of the hierarchies underpinning the legitimacy of AI ethics labor. Drawing from 75 in-depth interviews with technologists across research, development, open-source communities, and activism, the paper articulates how epistemic frameworks rooted in quantification and automation systematically privilege certain forms of AI ethics work, while delegitimizing others grounded in lived, situated experiences. This analysis is scaffolded by feminist STS, postcolonial, and Black feminist theorists, explicitly interrogating how epistemic power manifests in AI labor and proposing directions for reconfiguring these hierarchies.

Epistemic Hierarchies and the Status of AI Ethics Work

A core contribution of the paper is its detailed empirical mapping of how AI ethics labor is ranked relative to technical engineering in the cultures of AI. Ethics work is generally characterized as lower status, with participants describing it as a "chore" that is frequently delegated, gendered, and stripped of professional prestige—a finding paralleling prior observations on the devaluation of feminized labor within technical domains. This marginalization is compounded for women, non-binary, and minoritized workers, who often bear the burden of defending and legitimizing their participation in ethics roles—sometimes by translating experience-based concerns into business or technical language, often without success.

Quantitatively oriented interventions, such as Model Cards and Datasheets, have gained institutional traction precisely because they fit the “objective” modes of reasoning valorized in technical culture. The paper highlights how standardizing and automating ethics routines (model reporting, fairness checklists, appeals to external evidence) serve to legitimize ethics labor so long as it inhabits the epistemic style of engineering: externally validated, repeatable, and formally decidable. Case studies in the paper reveal engineers actively seeking to automate ethical reporting, reduce subjective ambiguity, and render ethical questions (such as fairness or harm) into instrumentally “fixable” properties. The rationale is not the trivialization of ethics per se, but that only through automation, quantification, and reference to authoritative external sources do ethical concerns become institutionally actionable.
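To make that epistemic style concrete, consider a minimal sketch of what a standardized, “formally decidable” model report can look like in code. This is an illustrative construction, not an artifact from the paper: the field names loosely follow the section headings of the original Model Cards proposal (Mitchell et al., 2019), and the completeness check stands in for the kind of automated validation the interviewed engineers sought.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a standardized model report, loosely following the
# section headings of the Model Cards proposal (Mitchell et al., 2019).
# Field names and the completeness check are illustrative assumptions,
# not artifacts from the paper under discussion.

@dataclass
class ModelCard:
    model_details: str          # architecture, version, training date
    intended_use: str           # in-scope and out-of-scope applications
    evaluation_data: str        # datasets used for benchmarking
    quantitative_analyses: dict = field(default_factory=dict)  # metric -> score
    ethical_considerations: str = ""  # free text, and often the least enforced

def is_complete(card: ModelCard) -> bool:
    """A 'formally decidable' completeness check: every field must be filled.

    Note what this automates away: it can verify that *something* was written
    under ethical_considerations, but not whether it reflects anyone's lived
    experience of harm. The check is repeatable and externally auditable,
    which is precisely what makes it legible to engineering culture.
    """
    return all([
        card.model_details,
        card.intended_use,
        card.evaluation_data,
        card.quantitative_analyses,
        card.ethical_considerations,
    ])

card = ModelCard(
    model_details="ResNet-50 v2, trained 2024-01",
    intended_use="Photo tagging; not for surveillance",
    evaluation_data="Internal benchmark v3",
    quantitative_analyses={"accuracy": 0.94, "subgroup_gap": 0.07},
    ethical_considerations="See appendix.",  # passes the check regardless of depth
)
assert is_complete(card)
```

The point of the sketch is what the check can and cannot decide: it verifies that a field was filled, not that any situated harm was heard.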

The Delegitimization of Located Complaints

Notably, the paper foregrounds how complaints or interventions originating in lived, embodied, and situated experiences are rendered illegitimate or “non-complaints” within this epistemic regime. Participants who raised issues based on their personal or community experience (e.g., advocates for interface inclusivity, or those referencing harms to minoritized groups) frequently had their concerns dismissed unless they could be formalized in external evidence or quantified business imperatives. The persistent recourse to colorblind “neutral” solutions (e.g., defaulting to grey VR hands rather than customizable skin tones) exemplifies how attempts to flatten specific, located experience into standardized metrics hide rather than address sociotechnical inequities.
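The grey-hands default can be read as a design decision encoded directly in software. The hypothetical sketch below (the names and settings are invented for illustration, not drawn from the paper or any actual VR platform) shows how a “neutral” hard-coded value is still a choice, one that forecloses the located alternative:

```python
from enum import Enum

# Hypothetical sketch of the VR-hands example: a "colorblind" default is
# itself a design decision encoded in the system, not an absence of one.

class HandAppearance(Enum):
    NEUTRAL_GREY = "grey"        # the "no decision" decision
    CUSTOM_SKIN_TONE = "custom"  # surfaces the choice to the user

# The "neutral" path: one hard-coded value applied to every user.
DEFAULT_APPEARANCE = HandAppearance.NEUTRAL_GREY

def hands_for_user(user_prefs: dict) -> str:
    """A located alternative: expose the choice rather than flattening it.
    (Illustrative only; not a design from the paper.)"""
    if user_prefs.get("hand_tone"):
        return user_prefs["hand_tone"]   # user-selected skin tone
    return DEFAULT_APPEARANCE.value      # falls back to grey, visibly a default
```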

This dynamic enacts what feminist and postcolonial theorists have described as epistemic oppression, wherein dominant systems filter which forms of knowledge count as legitimate, compelling minoritized actors to adapt their complaints into forms intelligible to the hegemonic order. The paper's interrogation of how model reporting and ethics quantification practices co-produce their own epistemic boundaries is a salient theoretical contribution, showing that the institutionalization of ethics via quantifiable standards can further marginalize alternative knowledges and shut down contestation.

Constructing Alternative Epistemic Practices

A significant thread in the paper examines attempts to construct alternative spaces for ethics discourse and action, especially those that explicitly center embodied and situated experience. Interview data indicates that in environments where shared experience of marginalization is foregrounded, practitioners feel enabled to ground ethics discussions in context-specific, lived realities, countering the abstraction of technocratic discourse. However, the paper is clear that these pockets remain marginal to the dominant epistemic regime, and when encountered in mainstream settings, experiential and community knowledge continues to face epistemic violence and exclusion.

This section also highlights how efforts to design AI systems for sensitive applications (e.g., health technologies) can benefit from proximate, user-centered engagement, yet even these are routinely reframed within a business or instrumental logic, ultimately subordinating lived knowledge to market imperatives. While alternative discursive and participatory forms exist, they are easily reabsorbed or obscured by the dominant, positivist objectivity that prevails within technical AI cultures.

Toward Humble Technical Practices

The culmination of the paper’s argument is a formal articulation of “humble technical practices.” Rather than rejecting quantification or technical rigor wholesale, the author advocates for technical practices that make their epistemic limits explicit, gesturing toward pluralism and refusing the view-from-nowhere objectivity that currently confers exclusive legitimacy. This is a concrete instantiation of Haraway’s situated knowledges and Suchman’s located accountability: the call is for technical artifacts, reports, and practices to reflect critically on what cannot be captured by metrics or automation, and to overtly platform and amplify knowledges derived from lived experience, particularly those of marginalized stakeholders.
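As a thought experiment, a “humble” report might state its own limits before its measurements. The sketch below is one possible reading of the proposal, not a specification from the paper; every field name is an assumption of mine.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "humble" model report. The fields and layout are
# my illustration of the paper's proposal, not an artifact it specifies.

@dataclass
class HumbleModelCard:
    quantitative_analyses: dict = field(default_factory=dict)  # metric -> score
    # What the metrics cannot see, stated in the report itself.
    epistemic_limits: list = field(default_factory=list)
    # Named, credited sources of situated knowledge (community groups,
    # activists, affected users), not anonymous "stakeholder input".
    situated_testimony: list = field(default_factory=list)
    # A standing channel for contestation after release, not one-off sign-off.
    contestation_contact: str = ""

    def render(self) -> str:
        """Foreground what the report does NOT know before what it measured."""
        lines = ["KNOWN EPISTEMIC LIMITS:"]
        lines += [f"  - {x}" for x in self.epistemic_limits]
        lines.append("GROUNDED IN TESTIMONY FROM:")
        lines += [f"  - {x}" for x in self.situated_testimony]
        lines.append("QUANTITATIVE ANALYSES (partial by construction):")
        lines += [f"  - {k}: {v}" for k, v in self.quantitative_analyses.items()]
        lines.append(f"CONTEST THIS REPORT: {self.contestation_contact}")
        return "\n".join(lines)

card = HumbleModelCard(
    quantitative_analyses={"subgroup_accuracy_gap": 0.07},
    epistemic_limits=[
        "Subgroup metrics use census categories that erase mixed identities.",
        "No evaluation data reflects assistive-technology use.",
    ],
    situated_testimony=["Accessibility advocates consulted 2024-01, credited by name"],
    contestation_contact="harms-review@example.org",
)
print(card.render())
```

The design choice worth noting is the ordering: limits and testimony render before the metrics, inverting the usual hierarchy in which quantitative results lead and caveats trail.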

The proposal for humble technical practice extends prior work on critical technical practice and participatory design, but specifically centers the redistribution of epistemic power as an explicit project. This model requires those with technical, economic, or institutional power to actively cede epistemic authority: elevating, resourcing, and legitimating lived experience both within and outside technical institutions. Rather than perfunctory limitations sections or token participatory panels, the recommendation is for deep epistemic humility, including direct citation and platforming of non-academic and activist expertise, community-led audits, and a structural reconstitution of the terms by which AI ethics claims are rendered legitimate.

Theoretically, this position is rooted in Black feminist and decolonial thought on epistemic oppression and the intractability of reconfiguring hierarchical knowledge systems. The author draws from Dotson, Lorde, Ahmed, and others to argue that technical solutionism, even in the name of ethics, will always circumscribe itself unless it radically pluralizes what counts as valid knowledge in practice.

Implications and Prospects for Future AI Ethics Practice

Practically, the paper’s analysis warns that the convergence of AI ethics into standardized, quantified, and automatable practices, while achieving partial legitimacy within engineering cultures, continues to entrench exclusionary epistemic hierarchies. Existing and future regulatory frameworks (e.g., the EU AI Act, the NIST AI Risk Management Framework) risk the same fate if they reduce sociotechnical ethics to compliance documents and metrics, sidelining the necessary contestation and lived realities of those affected. Recent work reinforces the necessity of critique that foregrounds power, lived experience, and intersectionality over the technical recoding of ethical principles (Raji et al., 2021; Birhane et al., 2022; Kong, 2022).

The concept of humble technical practices opens a new axis for AI-system evaluation and governance: beyond functional auditing and technical fairness, future research and institutional design must integrate mechanisms for reflexively examining and redistributing epistemic power. This includes fostering interdisciplinary, activist, and lay collaboration as core to development lifecycles, designing for contestation rather than consensus, and instantiating ongoing accountability to those with firsthand knowledge of system harms.

Looking forward, as AI systems are further embedded in governance, labor, and everyday life, the pluralization of epistemic authority in ethics labor becomes ever more urgent. Research programs that formalize, operationalize, and evaluate humble technical practices—moving beyond theoretical critique to experimental implementation—will be vital to expanding the boundaries of legitimate AI ethics.

Conclusion

This paper provides an empirically and theoretically grounded critique of the epistemic regimes governing AI ethics labor, boldly asserting that the authority conferred by quantification and automation comes at the cost of marginalizing embodied and situated knowledges. By situating contemporary techno-solutionist practices within a broader analysis of epistemic power, the work reframes the problem of ethics in AI not simply as a deficit of principles or oversight, but as a function of entrenched epistemic hierarchies. The proposal of humble technical practices is positioned as one necessary intervention for redressing these hierarchies, advocating for epistemological pluralism as the foundation of a more equitable and reflexive AI ethics.


Reference:

Widder, David Gray. "Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints." arXiv:2402.08171 (2024).
