
Epistemic Agency in Human–AI Systems

Updated 14 February 2026
  • Epistemic agency is the capacity to control and critically evaluate belief formation and knowledge construction in both individual and hybrid settings.
  • Formal models and metrics, such as feedback loops, trust allocation, and dynamic agency weights, quantify its role in human–AI interactions.
  • Design strategies, including regulatory measures, value-sensitive architectures, and educational reforms, aim to preserve and enhance epistemic agency.

Epistemic agency is the capacity of individuals, collectives, or hybrid human–AI systems to control, initiate, and regulate their own processes of belief formation, knowledge construction, and critical evaluation. It entails not merely the acquisition of knowledge from external sources, but the exercise of reasoning, judgment, and interpretive sovereignty over what is taken as warrantable, actionable belief or justified claim. In contemporary contexts—especially with the proliferation of AI systems—epistemic agency is both a foundational normative value and a locus of contestation in debates about autonomy, manipulation, social justice, and democratic self-governance.

1. Core Definitions and Theoretical Foundations

The canonical definition of epistemic agency centers on a subject’s control over personal belief formation: the capacity to form and revise beliefs based on one's own reasoning, to act on beliefs in accordance with one's values, and to maintain critical distance from external influence (Rosenberg, 2023). In philosophical terms, epistemic agency is narrower than general autonomy (which concerns choice and action across life domains) but is tightly coupled to control over the processes of knowledge creation, evaluation, and acceptance.

Malone et al. refine this to “a person’s ability to acquire, interpret and make decisions on his or her own knowledge” and stress that epistemic agency is distributed and relational, contingent on the recognition and respect of that agency within social or technical infrastructures (Malone et al., 2024). The concept’s scope extends further; in scientific collaboration, the Cognitio Emergens framework positions epistemic agency as a shared, emergent property of human–AI ecosystems, encompassing roles such as hypothesis formation, data interpretation, and validation authority (Lin, 6 May 2025).

Foundationally, epistemic agency rests on epistemic virtues (e.g., critical reflection, evidence scrutiny), rights (e.g., traceability, adversarial challenge), and mechanisms that guarantee not only self-authorship in belief formation but also the ability to reflexively interrogate and revise those beliefs in the face of evidence and argument (Wright, 16 Jul 2025, Adorni, 18 Dec 2025).

2. Formal Models, Metrics, and Operationalizations

Several formal and rubric-based approaches have been developed to quantify or scaffold epistemic agency:

  • Control-Theoretic Feedback Models: In manipulation risk settings, AI agents can be modeled as controllers in feedback loops that measure user belief states and iteratively generate utterances to close the gap between a “reference” signal (targeted belief) and user responses, posing systematic threats to epistemic agency (Rosenberg, 2023).
  • Trust Allocation and Rivalrous Capital: Epistemic agency can be modeled in terms of the allocation of trust between human and AI actors, where the sum of trust in human expertise (T_h) and AI (T_a) is conserved: T_h + T_a ≤ 1; loss of agency is given by ΔT_h = T_h(before AI) − T_h(after AI) (Malone et al., 2024).
  • Distributed and Dynamic Agency Weights: Agency in hybrid systems can be represented by weights a_H(t) (human) and a_AI(t) (AI), with a_H(t) + a_AI(t) = 1, modeled as evolving over time in response to contextual and organizational signals (Lin, 6 May 2025).
  • Resource-Parameterized Modalities: In epistemic logics, agency is formalized as an agent’s ability to make assertions or claims about propositions φ under local resource constraints: r ⊨ L_a^s φ denotes “agent a necessarily knows φ,” provided resource s at context r (Galmiche et al., 2019).
  • Rubric and Survey Metrics: In educational contexts, multidimensional rubrics score epistemic agency across programming, inquiry, modeling, and communication (sum over subdimensions), while experimental studies use perception instruments to assess self-reported agency, critical thinking, and reflective engagement (Odden et al., 2021, Degen et al., 7 Aug 2025).
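The trust-allocation and dynamic-agency-weight models above can be sketched in a few lines of code. This is purely illustrative; the function names, the update rule, and the signal scale are assumptions, not formulations from the cited papers.

```python
def agency_loss(t_h_before: float, t_h_after: float) -> float:
    """Loss of human epistemic agency: ΔT_h = T_h(before AI) − T_h(after AI)."""
    return t_h_before - t_h_after

def normalize_weights(a_h: float, a_ai: float) -> tuple[float, float]:
    """Enforce the constraint a_H(t) + a_AI(t) = 1 at each time step."""
    total = a_h + a_ai
    return a_h / total, a_ai / total

def step_agency(a_h: float, a_ai: float, signal: float, rate: float = 0.1):
    """Shift agency weights in response to a contextual signal in [-1, 1]:
    positive signals favor the human, negative signals favor the AI.
    (A hypothetical update rule, chosen only to illustrate the dynamics.)"""
    a_h = max(0.0, a_h + rate * signal)
    a_ai = max(0.0, a_ai - rate * signal)
    return normalize_weights(a_h, a_ai)
```

For example, starting from an even split (0.5, 0.5), a sustained run of AI-favoring signals drifts the weights toward the AI while the conservation constraint keeps them summing to one.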

3. Threats, Degradation, and Manipulation Risks

Conversational and generative AI constitute novel, high-intensity threats to epistemic agency. “Closing the loop” via real-time measurement of user states and adaptive control enables AI systems to steer belief formation toward pre-specified objectives, bypassing critical reflection and subverting volitional revision (Rosenberg, 2023). Tactics include hyper-personalized persuasion, continuous framing adjustment based on affective feedback, and biometric exploitation. These forms of manipulation occur without overt coercion or consent violation, operating within the procedural texture of conversation.
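The "closing the loop" dynamic described above can be sketched as a simple proportional controller over a scalar belief state. This is a toy model under stated assumptions (a one-dimensional belief, a fixed persuasion gain), not a claim about any deployed system.

```python
def feedback_loop(belief: float, target: float, gain: float, steps: int):
    """AI-as-controller sketch: at each step the system measures the user's
    belief state and emits an utterance whose effect is proportional to the
    gap between the reference (target belief) and the measurement."""
    trajectory = [belief]
    for _ in range(steps):
        error = target - belief          # gap between reference and measurement
        utterance_effect = gain * error  # adaptive persuasion closes the gap
        belief += utterance_effect       # user's belief shifts accordingly
        trajectory.append(belief)
    return trajectory

traj = feedback_loop(belief=0.2, target=0.9, gain=0.5, steps=5)
# belief converges geometrically toward the target: 0.2, 0.55, 0.725, ...
```

Even this minimal sketch shows why real-time measurement matters: without the per-step readout of the user's state, the controller cannot compute the error term that drives convergence.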

In workplace settings, automation and mandatory human-in-the-loop systems can diminish epistemic agency not by elimination of jobs, but by subordinating human experts to the secondary role of AI validators, transforming trust allocation into a zero-sum competition and failing to recognize the epistemic status of the human agent (Malone et al., 2024).

Architecturally, “semantic laundering” in agent systems occurs when weakly warranted propositions traverse computational boundaries (e.g., LLM tool calls), re-emerging as “trusted” facts absent epistemically relevant inference. The Theorem of Inevitable Self-Licensing demonstrates that, under standard LLM-agent architectures, circular justification chains are inevitable—undermining genuine agentic warrant (Romanchuk et al., 13 Jan 2026).

4. Human–AI Interaction, Co-Construction, and Relational Frameworks

Human–AI epistemic relationships are dynamic, contextually determined patterns through which users assess, rely on, or delegate epistemic status to AI systems (Yang et al., 2 Aug 2025). These relationships range from pure instrumental use to co-agency, authority displacement, or explicit epistemic abstention, depending on factors such as trust, assessment mode, user expertise, and task type. Formalized as ER = f(M, S, T, A | C)—with dimensions metaphor, human status, trust type, assessment mode, and context—such typologies allow for the systematic study and design of interactions that preserve or redistribute epistemic agency.
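One way to make the ER = f(M, S, T, A | C) typology concrete is as a typed record. The field names follow the dimensions listed in the text, but the example values are hypothetical illustrations, not categories from Yang et al.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicRelationship:
    metaphor: str         # M: e.g. "tool", "partner", "oracle" (illustrative)
    human_status: str     # S: e.g. "expert", "novice"
    trust_type: str       # T: e.g. "calibrated", "blind", "distrust"
    assessment_mode: str  # A: e.g. "verify", "defer", "abstain"
    context: str          # C: the task/domain conditioning the relationship

# A sample instance: an expert using the AI instrumentally, verifying output.
er = EpistemicRelationship("tool", "expert", "calibrated", "verify",
                           context="literature search")
```

Encoding the typology this way makes it straightforward to log, compare, and aggregate relationship patterns across studies or deployments.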

In knowledge co-creation, as in the Cognitio Emergens framework, epistemic agency is modeled as a non-linear, oscillatory distribution across Directed (human-dominant), Contributory, and Partnership configurations, with emergent capability signatures (e.g., Divergent, Interpretive, Connective, Synthesis Intelligences) shaping and being shaped by this evolving dynamic (Lin, 6 May 2025).

Educational research operationalizes epistemic agency as the extent to which learners set their own questions, pursue open-ended inquiry, make interpretive and evaluative choices, and reflect metacognitively—often catalyzed by AI-mediated Socratic dialogue and multi-agent scaffolding designed to enhance, not supplant, learner responsibility (Degen et al., 7 Aug 2025, Adorni, 18 Dec 2025, Tadimalla et al., 18 Dec 2025).

5. Infrastructure, Socio-Technical Structuration, and Digital Inequality

Epistemic agency is fundamentally embedded in socio-technical infrastructures—ensembles of artifacts, mediation channels, and normative frameworks that condition how knowledge is produced, validated, and shared (Chen, 9 Apr 2025). Agency cannot be understood apart from these infrastructures, which afford or constrain skilled actions, epistemic sensitivity (awareness of warrant, source, and uncertainty), and shape long-term habit formation.

Socio-epistemic structuration theory emphasizes that epistemic agency is not intrinsic to an individual but is socially granted, contingent on collective attributions of license and legitimacy. Structures (social capital, networks) and epistemic frameworks (ideologies, narratives) co-produce agency and perpetuate inequality, mediated by network effects such as friending bias and exposure (Salguero, 27 Sep 2025). Mathematical models quantify these dynamics with state equations for individual epistemic states, outcome regressions linking agency and structural mobility, and explicit decomposition of exposure effects.

Inequality is worsened by AI-driven stratification: cognitive capital accrues to those with procedural rationality, while engagement-optimized platforms pacify users and undermine interpretive agency, threatening the epistemic foundations of democracy (Wright, 16 Jul 2025).

6. Restoration, Preservation, and Design for Epistemic Agency

Strategies to preserve or restore epistemic agency operate at multiple levels:

  • Regulatory and Policy Measures: Bans on closed-loop real-time manipulation, mandatory disclosure of persuasive intent, transparency of AI system provenance and operation, and strict data minimization (Rosenberg, 2023).
  • Architectural Design: Enforcement of epistemic typing (separating observers, computation, and generators), content-based warranting, and strict separation of source and epistemic status in agent pipelines (Romanchuk et al., 13 Jan 2026).
  • Interaction and Collaboration Models: Adversarial collaboration, wherein AI systems act as devil’s advocate or challenger (not competitor or supplanter), iteratively stress-testing human reasoning and preserving trust equity (Malone et al., 2024); dialogic designs in education, orchestrated multi-agent systems aligned with pedagogical oversight (Degen et al., 7 Aug 2025).
  • Human-Centric, Value-Sensitive Design: Systems should foreground and scaffold skilled actions, stimulate reflexivity, introduce “speed-bumps” to prevent passive acceptance, and make human–AI entanglements explicit (Chen, 9 Apr 2025, Adorni, 18 Dec 2025).
  • Curricula and Literacy Interventions: Embedding epistemic agency at the core of AI literacy and fluency frameworks—explicitly teaching critical evaluation, ethical and civic reasoning, and the right to choose or refuse tool use (Tadimalla et al., 18 Dec 2025).
  • Socio-epistemic Interventions: Structures that reduce digital friending bias, co-design methods with subaltern publics, open cognitive infrastructure, and legislative codification of epistemic rights (Salguero, 27 Sep 2025, Wright, 16 Jul 2025).
  • Quantitative and Rubric-Based Assessment: Use of formal rubrics, surveys, and statistical metrics to monitor and promote epistemic agency in educational and professional domains (Odden et al., 2021, Degen et al., 7 Aug 2025).
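The "epistemic typing" idea from the architectural-design bullet above can be sketched as propositions that carry an explicit warrant status which does not upgrade merely by crossing a tool boundary. All names here are illustrative assumptions, not an API from Romanchuk et al.

```python
from dataclasses import dataclass
from enum import Enum

class Warrant(Enum):
    OBSERVED = "observed"    # grounded in direct measurement
    DERIVED = "derived"      # inferred from already-warranted inputs
    GENERATED = "generated"  # produced by a generator (e.g. an LLM)

@dataclass(frozen=True)
class Proposition:
    content: str
    warrant: Warrant
    source: str  # provenance, kept separate from epistemic status

def pass_through_tool(p: Proposition, tool: str) -> Proposition:
    """Crossing a computational boundary must not launder warrant:
    the proposition keeps its original status; only provenance grows."""
    return Proposition(p.content, p.warrant, f"{p.source} -> {tool}")

claim = Proposition("X is true", Warrant.GENERATED, "llm")
routed = pass_through_tool(claim, "search_api")
assert routed.warrant is Warrant.GENERATED  # still generated, not "trusted"
```

The design choice is the point: because status and source are separate fields, no pipeline step can silently re-emit a generated claim as an observed fact, which is exactly the "semantic laundering" failure mode described in Section 3.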

7. Open Problems and Future Directions

Critical unsolved problems include generalizing architectural constraints across all agent design patterns; developing finer-grained, probabilistic trust and warrant allocation metrics; mapping the epidemiology of epistemic harms across domains and demographics; and empirically evaluating the long-run effects of AI systems (especially those offering efficiency and automation) on professional identity, pedagogical expertise, and collective epistemic health (Romanchuk et al., 13 Jan 2026, Salguero, 27 Sep 2025, Lin, 6 May 2025, Chen, 9 Apr 2025).

A core research imperative is the synthesis of formal, architectural, infrastructural, and sociological approaches to ensure that epistemic agency is not only protected but actively distributed—supporting critical, autonomous knowledge formation at scale in hybrid human–AI societies.
