Detect Off-Platform Tool Use Triggered by In-Task Ratings

Determine whether participants in the LLM-assisted argumentative essay writing study shifted part of their work to external tools outside the instrumented platform because of the in-the-moment self-efficacy and trust rating requirement.

Background

To capture turn-level dynamics, the study required participants to provide self-efficacy and trust ratings before each new prompt. A subset of participants reported the rating process as distracting, raising the possibility that they moved portions of their writing workflow to external tools such as other LLMs.

Because the instrumentation logged only in-platform activity, the authors explicitly state they cannot tell whether such off-platform usage occurred, leaving an unresolved question regarding data completeness and interaction validity.
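One way such off-platform usage could be probed retrospectively, if the in-platform logs include per-event character counts, is a paste-based heuristic: text composed elsewhere tends to enter the platform as large paste events rather than keystrokes. The sketch below is not from the paper; the `Event` structure, the `paste_threshold`, and the `paste_share` cutoff are all hypothetical assumptions about what the instrumentation might record.

```python
# Hypothetical heuristic (not the paper's method): flag sessions where
# paste activity suggests text was composed off-platform.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str     # assumed event types: "keystroke" or "paste"
    n_chars: int  # characters added to the essay by this event

def flag_offplatform(events, paste_threshold=200, paste_share=0.3):
    """Return True if a session shows signs of off-platform drafting:
    any single large paste, or pastes making up a large share of text."""
    # Any single paste above the threshold is a strong signal.
    if any(e.kind == "paste" and e.n_chars >= paste_threshold for e in events):
        return True
    # Otherwise, check the overall fraction of pasted characters.
    pasted = sum(e.n_chars for e in events if e.kind == "paste")
    typed = sum(e.n_chars for e in events if e.kind == "keystroke")
    total = pasted + typed
    return total > 0 and pasted / total >= paste_share
```

The thresholds would need calibration against sessions known to be fully in-platform, and the heuristic cannot distinguish pasting from another LLM versus pasting one's own notes, so it narrows rather than resolves the question the authors raise.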

References

Specifically, participants who experienced the ratings as distracting may have partially shifted their work to external tools (e.g., other LLMs), but because our logging was limited to in-platform events (Section~\ref{method:interface}), we cannot determine whether this occurred.

Authorship Drift: How Self-Efficacy and Trust Evolve During LLM-Assisted Writing  (2602.05819 - Park et al., 5 Feb 2026) in Section 6 (Limitations and Future Work)