Authorship Drift: How Self-Efficacy and Trust Evolve During LLM-Assisted Writing

This lightning talk examines an empirical study of how self-efficacy and trust evolve during human-LLM collaborative writing. Through analysis of 302 participants' multi-turn writing sessions, the research reveals that self-efficacy and trust follow divergent trajectories, with declining self-efficacy strongly linked to increased editorial delegation, measurable authorship drift, and diminished agency. The findings challenge the assumption that psychological states stay static during AI-assisted work and carry critical implications for designing systems that preserve user agency and authorship integrity.
Script
When you write with a language model, something subtle happens beneath the surface. Your confidence in your own abilities and your trust in the AI begin to shift in opposite directions, reshaping not just how you write, but whether the words on the page are truly yours.
So what exactly happens to writers during multi-turn collaboration with language models?
Building on this question, the researchers designed a controlled experiment where participants wrote essays while interacting with a language model. At every turn, they rated their confidence in completing the task alone and their trust in the AI, creating a detailed temporal map of psychological change.
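To make that temporal map concrete, here is a minimal sketch of how per-turn ratings might be recorded and summarized as trajectory slopes. The 7-point scale, the example values, and the slope-based notion of a "divergent" trajectory are illustrative assumptions, not the study's actual instrument or analysis code.

```python
import numpy as np

# Hypothetical per-turn self-reports for one participant
# (e.g., 7-point Likert ratings collected after each turn).
turns = np.arange(1, 9)
self_efficacy = np.array([6, 6, 5, 5, 4, 4, 3, 3])
trust = np.array([4, 4, 5, 5, 5, 6, 6, 6])

def trajectory_slope(turns: np.ndarray, ratings: np.ndarray) -> float:
    """Least-squares slope of a rating series across turns."""
    slope, _intercept = np.polyfit(turns, ratings, deg=1)
    return slope

# Opposite-signed slopes flag a divergent trajectory for this participant.
print(f"self-efficacy slope: {trajectory_slope(turns, self_efficacy):+.2f} per turn")
print(f"trust slope:         {trajectory_slope(turns, trust):+.2f} per turn")
```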
The results exposed a striking divergence between these two psychological constructs.
The data revealed a fundamental asymmetry. While most participants maintained stable trust in the language model, over a quarter experienced declining confidence in their own writing abilities, self-efficacy eroding turn by turn even as their trust in the AI held steady or grew.
These psychological shifts manifested in concrete behaviors. Participants with declining self-efficacy increasingly asked the language model to directly edit their text rather than provide feedback, creating a behavioral marker of agency loss that could be detected in real time.
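As a rough sketch of what detecting that marker in real time could look like, the snippet below labels each user prompt as editorial delegation or feedback-seeking using simple cue phrases. The categories and keyword lists are illustrative assumptions; the study's actual coding scheme is not reproduced here, and a deployed system would likely use a trained classifier.

```python
import re

# Illustrative cue phrases, not an exhaustive or validated lexicon.
DELEGATION_CUES = [
    r"\brewrite\b", r"\bedit (this|my)\b", r"\bfix (this|my)\b",
    r"\bwrite (it|this) for me\b",
]
FEEDBACK_CUES = [
    r"\bfeedback\b", r"\bwhat do you think\b", r"\bany suggestions\b",
    r"\bhow can i improve\b", r"\bis this clear\b",
]

def classify_prompt(prompt: str) -> str:
    """Label a user turn as editorial delegation, feedback-seeking, or other."""
    text = prompt.lower()
    if any(re.search(pattern, text) for pattern in DELEGATION_CUES):
        return "delegation"
    if any(re.search(pattern, text) for pattern in FEEDBACK_CUES):
        return "feedback"
    return "other"

print(classify_prompt("Can you rewrite this paragraph for me?"))  # delegation
print(classify_prompt("How can I improve my conclusion?"))        # feedback
```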
This behavioral shift carried a measurable cost to authorship itself.
The researchers quantified authorship through both computational measures and self-report. Participants with declining self-efficacy adopted significantly more language model content in their final essays and reported feeling less ownership and agency over the text they produced.
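One simple computational proxy for adopted content is n-gram overlap between the final essay and the model's suggestions. This sketch assumes a trigram overlap measure for illustration; the study's actual authorship metric may differ.

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Lowercased word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def adoption_rate(final_essay: str, llm_suggestions: str, n: int = 3) -> float:
    """Fraction of the essay's n-grams that also appear in LLM output."""
    essay_grams = ngrams(final_essay, n)
    if not essay_grams:
        return 0.0
    return len(essay_grams & ngrams(llm_suggestions, n)) / len(essay_grams)

essay = "The results suggest a clear divergence between trust and confidence."
suggestion = "Your results suggest a clear divergence between the two constructs."
print(f"adoption rate: {adoption_rate(essay, suggestion):.2f}")
```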
Interestingly, the analysis revealed that trust acts as a protective factor. Participants who trusted the language model more experienced slower self-efficacy decline, suggesting that the psychological dynamics of human-AI collaboration are more nuanced than simple substitution effects.
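A moderation effect like this is often tested with a mixed-effects model containing a turn-by-trust interaction. The sketch below uses statsmodels on fabricated toy data; the formula, column names, and values are assumptions for illustration, not the paper's reported specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per turn.
# Higher-trust participants are given shallower self-efficacy declines.
df = pd.DataFrame({
    "participant":   [1]*4 + [2]*4 + [3]*4 + [4]*4,
    "turn":          [1, 2, 3, 4] * 4,
    "trust":         [2]*4 + [3]*4 + [5]*4 + [6]*4,
    "self_efficacy": [6, 5, 4, 3,  6, 5, 5, 4,  5, 5, 5, 4,  6, 6, 6, 6],
})

# Random intercept per participant; a positive turn:trust coefficient would
# indicate that higher trust goes with a shallower decline over turns.
model = smf.mixedlm("self_efficacy ~ turn * trust", df, groups=df["participant"])
print(model.fit().summary())
```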
These findings point toward concrete design interventions. Future writing assistants could monitor prompt patterns to identify moments of agency loss and adaptively encourage feedback-seeking behaviors rather than editorial delegation, helping users maintain both confidence and authorship.
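As a sketch of what such an intervention might look like, the monitor below watches the share of delegation-labeled prompts in a recent window and surfaces a nudge toward feedback-seeking when it crosses a threshold. The window size, threshold, and nudge wording are illustrative assumptions, not a proposal from the study.

```python
from collections import deque

def make_agency_monitor(window: int = 5, threshold: float = 0.6):
    """Return a callable that tracks recent prompt labels and flags agency loss."""
    recent = deque(maxlen=window)

    def observe(label: str) -> str | None:
        recent.append(label)
        if len(recent) == window:
            delegation_share = sum(1 for x in recent if x == "delegation") / window
            if delegation_share >= threshold:
                recent.clear()  # reset so the nudge is not repeated every turn
                return ("You've asked for direct edits several times in a row. "
                        "Want feedback on your own draft instead?")
        return None

    return observe

observe = make_agency_monitor()
for label in ["feedback", "delegation", "delegation", "delegation", "delegation"]:
    nudge = observe(label)
    if nudge:
        print(nudge)
```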
This research reveals that language model assistance carries a hidden cost: even as trust in the AI holds steady or grows, self-efficacy can quietly erode, pulling authorship away from the human and toward the machine. You can explore the full study and discover more cutting-edge research at EmergentMind.com.