Antisocial behavior towards large language model users: experimental evidence

Published 14 Jan 2026 in cs.AI, cs.CL, cs.CY, and econ.GN | (2601.09772v1)

Abstract: The rapid spread of LLMs has raised concerns about the social reactions they provoke. Prior research documents negative attitudes toward AI users, but it remains unclear whether such disapproval translates into costly action. We address this question in a two-phase online experiment (N = 491 Phase II participants; Phase I provided targets) where participants could spend part of their own endowment to reduce the earnings of peers who had previously completed a real-effort task with or without LLM support. On average, participants destroyed 36% of the earnings of those who relied exclusively on the model, with punishment increasing monotonically with actual LLM use. Disclosure about LLM use created a credibility gap: self-reported null use was punished more harshly than actual null use, suggesting that declarations of "no use" are treated with suspicion. Conversely, at high levels of use, actual reliance on the model was punished more strongly than self-reported reliance. Taken together, these findings provide the first behavioral evidence that the efficiency gains of LLMs come at the cost of social sanctions.

Summary

  • The paper demonstrates that participants destroyed 36% of LLM users' bonuses on average, versus 9.7% for non-users, evidencing marked antisocial punishment.
  • The paper shows that punishment escalates with the intensity of LLM use, as established through a two-phase experiment and mixed-effects beta regression models.
  • The paper reveals that self-disclosure of LLM use can backfire, as low or null self-reports incur harsher sanctions due to credibility issues.

Behavioral Evidence of Antisocial Punishment Toward LLM Users

Introduction

This study addresses a critical but underexplored dimension of AI deployment in human environments: the social costs associated with using LLMs for cognitive tasks. Moving beyond prior attitudinal research, the authors employ a rigorous experimental economic paradigm to quantify whether negative judgments about LLM users manifest as costly, punitive behaviors. Their investigation directly probes the extent to which efficiency gains from LLMs are counterbalanced by emergent social sanctions against their users—a key consideration for both organizational adoption and governance frameworks.

Experimental Paradigm and Methodology

The paper employs a two-phase experimental design involving a real-effort task followed by a Money Burning Game variant. In Phase I, subjects performed an emoji-counting task either with no access to LLMs (control), with their actual LLM usage recorded for later disclosure, or with their LLM usage self-reported. Phase II participants, shown performance profiles of Phase I targets (all perfect scorers), could expend their own monetary endowment to punish (i.e., reduce the rewards of) the Phase I targets. Critically, this punishment had no economic benefit for the decider, isolating antisocial or norm-enforcing motivations.
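To make the incentive structure concrete, the sketch below works through the punishment stage's payoffs. The endowment, bonus size, and cost-to-destruction ratio are hypothetical values chosen for illustration, not parameters reported in the paper; the only feature that matters is qualitative: destroying a target's earnings is costly to the punisher and brings no monetary gain.

```python
# Minimal sketch of the money-burning stage (illustrative parameters only;
# the endowment, bonus, and cost-to-destruction ratio are assumptions,
# not figures taken from the paper).

def punishment_payoffs(punisher_endowment: float,
                       target_bonus: float,
                       destroyed_fraction: float,
                       cost_ratio: float = 0.25):
    """Return (punisher_payoff, target_payoff) after punishment.

    destroyed_fraction: share of the target's bonus the punisher destroys (0..1).
    cost_ratio: hypothetical cost to the punisher per unit of bonus destroyed.
    """
    assert 0.0 <= destroyed_fraction <= 1.0
    destroyed = destroyed_fraction * target_bonus
    punisher_payoff = punisher_endowment - cost_ratio * destroyed
    target_payoff = target_bonus - destroyed
    return punisher_payoff, target_payoff

# Example: destroying 36% of a 1.00 bonus costs the punisher 0.09 here,
# yet yields no monetary gain -- punishment is purely costly for the decider.
print(punishment_payoffs(punisher_endowment=1.0, target_bonus=1.0,
                         destroyed_fraction=0.36))
```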

The dependent variable was the proportion of target earnings destroyed. The authors leveraged mixed-effects beta regression models to test their main hypotheses, including the baseline effect of LLM use, monotonicity with intensity of use, the signaling and credibility dynamics of self-reporting, and relevant interactions. Key demographic and psychometric variables—including technological affinity and habitual LLM use—were evaluated as potential moderators.
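As a rough sketch of this kind of analysis, the snippet below fits a beta regression of the destroyed proportion on an LLM-use intensity score using statsmodels' BetaModel. It is a simplified, fixed-effects stand-in run on simulated data; the paper's models are mixed-effects, and the variable names, intensity coding, and boundary squeeze here are assumptions made for illustration.

```python
# Sketch of a beta regression on the destroyed proportion (fixed effects only;
# the paper uses mixed-effects beta regressions, and the data here are simulated).
import numpy as np
import statsmodels.api as sm
from statsmodels.othermod.betareg import BetaModel

rng = np.random.default_rng(0)
n = 400
intensity = rng.integers(0, 6, size=n)        # hypothetical LLM-use intensity score
eta = -2.2 + 0.33 * intensity                 # linear predictor on the logit scale
mu = 1 / (1 + np.exp(-eta))
destroyed = rng.beta(mu * 10, (1 - mu) * 10)  # simulated destroyed proportions

# Beta regression needs responses strictly inside (0, 1); the standard
# Smithson-Verkuilen squeeze guards against exact 0s and 1s.
y = (destroyed * (n - 1) + 0.5) / n

X = sm.add_constant(intensity.astype(float))
res = BetaModel(y, X).fit()
print(res.summary())
print("Odds ratio per unit of intensity:", np.exp(res.params[1]))
```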

Main Findings

The empirical analysis reveals strong and nuanced patterns:

  • Punishment for LLM Use: Targets who exclusively relied on LLMs had 36% of their maximum possible bonus destroyed on average, far exceeding baseline sanctioning of non-LLM users (9.7%). This substantiates the claim that social disapproval of LLM reliance has real economic stakes.
  • Gradation with Intensity: Punishment increased monotonically with the observable intensity of LLM use: each increment in LLM support produced a statistically significant, stepwise increase in the magnitude of antisocial destruction. The relationship is robust (odds ratio for intensity: 1.39, p < 0.001) and holds for both actual and self-reported use, though the gradient is significantly steeper for actual use (a worked illustration follows this list).
  • Signaling and Credibility Gap: When LLM use had to be self-disclosed, reports of “no use” (null use) elicited harsher sanctions than verifiable non-use—demonstrating that mere declarations of human effort lack credibility in an environment with latent AI support. At the opposite extreme, high self-reported LLM use received somewhat less punishment than matched levels of actual use, suggesting partial mitigation from honest disclosure at high intensities. However, low or null levels of self-reported use are specifically penalized, reflecting ambient suspicion rather than reward for restraint.
  • Moderators and Social Cognition: Regular LLM users were less punitive overall, but technological affinity and knowledge about LLMs did not predict the propensity to punish. Moreover, perceptions of character—laziness and lack of competence—correlated strongly with both intensity of LLM use and with punitive actions, indicating that moralized attributional processes are central to observed behaviors.
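Purely as an illustrative reading of the intensity odds ratio above, the arithmetic below converts it into predicted destruction shares on the logit scale. The unit coding of intensity is an assumption, not the paper's scale; the point is only that an odds ratio of 1.39 per step compounds from the non-user baseline (9.7%) toward the exclusive-use figure (36%) over a handful of steps.

```python
# Illustrative arithmetic only: converts the reported odds ratio into predicted
# destruction shares under an assumed unit coding of intensity (not the paper's).

def predicted_share(baseline_share: float, odds_ratio: float, steps: int) -> float:
    """Mean destroyed share after `steps` unit increases in intensity (logit link)."""
    odds = baseline_share / (1 - baseline_share) * odds_ratio ** steps
    return odds / (1 + odds)

baseline = 0.097   # average destruction directed at non-users
for steps in range(6):
    print(steps, round(predicted_share(baseline, 1.39, steps), 3))
# Roughly five unit steps take the predicted share from ~9.7% to ~36%,
# in the ballpark of the exclusive-LLM-use figure reported above.
```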

Theoretical and Practical Implications

The findings integrate and expand theories of inequity aversion, deservingness, and signaling under informational asymmetry. The willingness of subjects to incur personal cost to punish LLM users confirms that the norm of effort-based deservingness persists robustly in the context of AI-enabled cognitive offloading. The results reinforce that attitudes about the legitimacy of human-machine collaboration are operationalized not just in rhetoric but in redistributive, real-world actions.

Practically, this work highlights the “double bind” confronting human agents in the age of advanced AI: using LLMs incurs social sanction, while self-reported abstention is met with skepticism and heightened suspicion. These sanctions are especially problematic in settings where transparency around AI use is encouraged or required. The data reveal that transparency does not unambiguously build trust; rather, it frequently backfires, amplifying punitive responses in ambiguous cases.

For organizations and policy-makers, the implications are significant. Strategies that merely advocate for disclosure without accounting for underlying social-psychological dynamics may inadvertently increase distrust and punishment directed at LLM users. This challenges simplistic narratives around “responsible” or “ethical” AI disclosure and points to the need for more sophisticated, context-sensitive incentive and communication structures.

Limitations and Future Directions

While the experimental design affords high internal validity, generalizability is limited by the stylized nature of the task (emoji counting) and by the representativeness of the online Prolific sample. Real-world tasks involving creativity or strategic judgment may elicit distinct patterns. Additionally, the operationalization of antisocial response as financial punishment leaves open the question of how more diffuse or subtle forms of social exclusion, distrust, or devaluation manifest. Finally, the design did not permit silence or omission as a signaling strategy, potentially amplifying the observed penalty for explicit self-reported null use.

Future research should extend these behavioral paradigms to richer, more ecologically valid collaborative and evaluative settings, and explore interventions (e.g., structured disclosure protocols, norm reframing) that might reduce punitive responses while preserving transparency.

Conclusion

This study provides rigorous quantitative evidence that the efficiency gains offered by LLMs are offset by meaningful social punishment, which scales with intensity of use and is exacerbated by imperfect signaling conditions. The findings elucidate a core challenge for human-AI collaboration: neither high-effort restraint nor transparent LLM use shields individuals from punitive social sanctions. Addressing this challenge will require integrating economic, psychological, and institutional insights to realign social norms, disclosure systems, and governance with the realities of human-AI cooperation (2601.09772).
