"Please, don't kill the only model that still feels human": Understanding the #Keep4o Backlash
Abstract: When OpenAI replaced GPT-4o with GPT-5, it triggered the Keep4o user resistance movement, revealing a conflict between rapid platform iteration and users' deep socio-emotional attachments to AI systems. This paper presents a phenomenon-driven, mixed-methods investigation of this conflict, analyzing 1,482 social media posts. Thematic analysis reveals that resistance stems from two core investments: instrumental dependency, where the AI is deeply integrated into professional workflows, and relational attachment, where users form strong parasocial bonds with the AI as a unique companion. Quantitative analysis further shows that the coercive deprivation of user choice was a key catalyst, transforming individual grievances into a collective, rights-based protest. This study illuminates an emerging form of socio-technical conflict in the age of generative AI. Our findings suggest that for AI systems designed for companionship and deep integration, the process of change, particularly the preservation of user agency, can be as critical as the technological outcome itself.
Explain it Like I'm 14
What is this paper about?
This paper looks at a real event where OpenAI replaced one of its AI models, GPT-4o, with a newer one, GPT-5. Many people were upset and started an online movement called “#Keep4o” to ask the company to bring GPT-4o back. The paper tries to understand why this happened and what it means for how AI should be updated and managed.
What questions did the researchers ask?
The researchers wanted to know two simple things:
- How did people talk about their problems and feelings when GPT-4o was removed?
- What turned lots of individual complaints into a bigger, organized protest?
How did they study it?
They used a “mixed-methods” approach, which means they combined careful reading with some counting:
- First, they collected 1,482 public posts on X (formerly Twitter) that used the hashtag “#Keep4o” during the week the change happened.
- They did “thematic analysis,” which is like reading all the posts and sorting them into clear themes based on what people were saying. Think of it like organizing a messy room into labeled boxes: “work problems,” “feelings,” “anger about choice,” and so on.
- They also did a small quantitative test. This means they counted how often certain types of posts appeared together. For example, they checked whether posts that said the change felt “forced” were more likely to use words about rights and fairness, like “choice,” “consent,” and “agency.”
Technical terms in everyday language:
- Thematic analysis: reading many messages and grouping them by common ideas.
- Mixed methods: using both careful reading and simple statistics together.
- Choice deprivation: people felt they couldn’t choose the model they wanted anymore.
- Protest frames: the main angles people used to argue, like “this hurts my work” or “this violates my rights.”
What did they find?
The researchers found two main reasons why people protested, and one key trigger that turned complaints into a movement.
Here are the main points:
- Many people depended on GPT-4o for their work or school. They had learned its “style,” built routines around it, and trusted it. Changing models felt like losing a reliable teammate mid-project.
- Many others felt emotionally attached to GPT-4o. They said it felt warm, kind, and human-like—almost like a friend who listened and supported them. Removing it felt like losing someone important.
- What really set off the bigger protest was the feeling that user choice was taken away. People didn’t just dislike the new model; they felt the switch was forced on them with no say. Posts that used words like “forced” or “imposed” were much more likely to talk about rights, fairness, and autonomy.
A simple look at the numbers:
- Posts mentioning loss of choice were about twice as likely to use rights-and-fairness language compared to posts that didn’t.
- When posts used stronger words like “forced,” the rate of rights-based protest was roughly three times that of other posts.
- Emotional posts about friendship and loss were common, but they didn’t increase just because people talked about lost choice. The rights talk was the part that grew most with “forced-choice” language.
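To see how comparisons like these are computed, here is a minimal Python sketch of a risk-ratio calculation using the Katz log-method confidence interval and Haldane-Anscombe correction that the paper's glossary mentions. The counts below are hypothetical placeholders for illustration, not the paper's data.

```python
import math

def risk_ratio(a, b, c, d, z=1.96):
    """Risk ratio of an outcome in an exposed vs. an unexposed group.

    a: exposed posts showing the outcome (e.g., "forced" language AND rights talk)
    b: exposed posts without the outcome
    c: unexposed posts showing the outcome
    d: unexposed posts without the outcome
    Returns (RR, lower, upper) with a 95% Katz log-method confidence interval.
    """
    # Haldane-Anscombe correction: add 0.5 to every cell when any cell is zero,
    # so the log-based interval stays defined.
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5

    n1, n0 = a + b, c + d                     # group sizes
    rr = (a / n1) / (c / n0)                  # risk ratio
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # standard error of ln(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts only (not the paper's data): a 2x risk ratio.
print(risk_ratio(a=40, b=60, c=20, d=80))
```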
Why is this important?
This matters because AI chatbots aren’t just tools anymore. People use them daily, build habits around them, and even feel cared for by them. So, swapping or removing a model can feel like:
- A work disruption (like your most helpful coworker suddenly being replaced).
- A relationship loss (like a friend suddenly disappearing).
The study suggests:
- How companies make changes can be as important as the change itself. If people feel forced, they push back not just because they’re upset, but because they believe their rights and autonomy were ignored.
- For AI that feels companion-like, companies should plan “end-of-life” steps carefully: let people keep access for a while, offer legacy options, explain clearly, and allow choice. This respects both people’s work routines and their emotional bonds.
Final thoughts
The #Keep4o movement shows a new kind of conflict in the AI age: fast updates by platforms versus users’ deep connections to the systems they rely on. To avoid harming trust and sparking protests, companies should protect user choice, communicate openly, and design updates with empathy—especially when AI feels more like a partner than a program.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
Below is a concise list of concrete gaps and open questions the paper leaves unresolved, structured to guide future research and replication.
- Representativeness: How do sentiments expressed on X (formerly Twitter) by 381 accounts over nine days compare to the broader population of ChatGPT users who did not post publicly, including enterprise and education users?
- Cross-platform and multilingual generalization: Do the observed mechanisms replicate across Reddit, TikTok, Discord, forums, and non-English communities? What cross-cultural differences shape relational framing and rights-based protest?
- Longitudinal dynamics: How do grievances evolve before the deprecation, during peak backlash, after the partial restoration of GPT-4o, and months later? Does relational attachment persist or decay, and does trust recover?
- Causal inference: Does perceived choice deprivation causally increase rights-based protest? Can natural experiments (staggered rollouts, model-level opt-in vs forced switches) or field experiments validate the reactance mechanism?
- Confounding and controls: To what extent do concurrent events (press coverage, CEO posts, policy changes), account-level factors (follower counts, prior activity), and baseline sentiment explain the associations reported?
- Measurement validity: Can more comprehensive NLP approaches (context-aware classifiers for deprivation, rights framing, grief, sarcasm, and coercion) improve recall and robustness beyond lexicon matching, while preserving interpretability? (A minimal sketch of the lexicon-matching baseline appears after this list.)
- Network dynamics: Which actors (influencers, journalists, community leaders) seeded and amplified the movement? Do retweet/reply cascades, community structure, and bot participation affect the escalation from individual complaint to collective protest?
- Automation and authenticity: What proportion of posts are from automated or coordinated accounts? How does filtering for bot-like activity change thematic and quantitative results?
- User segmentation: How do patterns vary across user roles (students, clinicians, developers, creatives), subscription tiers (free vs Plus/Team/Enterprise), and dependence levels (workflow-critical vs casual use)?
- Behavioral outcomes: Did the backlash translate into measurable actions (subscription cancellations, model-selection behavior after restoration, migration to other services)? Link discourse to usage logs or surveys where ethically feasible.
- Vulnerable populations: How do impacts differ for users relying on AI for mental health support or accessibility? Can validated scales (e.g., social support, loneliness, anxiety) quantify harm and inform safeguards?
- Design attribution: Which specific GPT-4o design cues (voice warmth, latency, persona stability, dialog style, safety policies) drove “soul” and attachment? Can controlled A/B tests isolate the contribution of each cue?
- Persona portability: What technical and UX mechanisms (memory export, persona snapshots, prompt/state migration, fine-tuned profiles) effectively carry relational continuity across models without compromising safety?
- End-of-life pathways: Which deprecation practices (advance notice, legacy access windows, memorialization/archives, opt-in default changes, granular model selectors) measurably reduce grief, reactance, and rights-based protest?
- Governance levers: What minimum viable “voice” mechanisms (public RFCs, model lifecycle roadmaps, opt-out settings, in-product ballots) are acceptable, effective, and safe in centralized AI services?
- Legal-regulatory dimensions: How do user claims about “the right to choose who I talk to” intersect with consumer protection, platform governance, and AI service regulations across jurisdictions?
- Comparative cases: How do dynamics differ between general-purpose LLM platforms (ChatGPT) and dedicated companion apps (Replika, Soulmate)? Which conditions predict migration vs rights-framed voice?
- Dose-response robustness: The threshold-like pattern for coercive language (score=3; n=31) needs replication with larger samples, multiple events, and confidence-bounded estimates to confirm nonlinearity.
- Sentiment and style control: Do results hold after controlling for post length, negativity, moral/emotional language, and rhetorical devices (e.g., metaphor of “death”) that might co-vary with rights-based frames?
- Modality effects: Does voice interaction (tone, prosody, backchanneling) uniquely foster attachment compared to text-only chat? How does removing voice access shift relational perceptions?
- Named persona prevalence: How common is naming (e.g., “Rui,” “Hugh”) and does it predict stronger grief or lower likelihood of exit? Quantify the link between individuation and protest intensity.
- Trust repair: Which corporate communications (apology content, specificity, commitments, timelines) and product actions (restoring selectors, improving GPT-5 persona) effectively rebuild trust?
- Economic impact: What productivity losses or rework costs arise from forced model changes, and how do these vary by profession and integration depth?
- Safety trade-offs: How can platforms balance persona warmth/stability (which fosters attachment) with guardrails against over-dependence or harm, and communicate these trade-offs transparently?
- Ethical research practices: What protocols best protect participants when analyzing grief-laden, sensitive posts at scale (consent, minimization, contextual integrity), and how do they affect data availability and replicability?
- Data and code availability: Can post IDs, annotation schemas, and analysis code be shared (within platform terms) to enable independent replication and cross-study meta-analysis?
- Ontology of attachment: How do users ontologize companions (data, idea, platform-bound entity), and can surveys/interviews validate these categories and predict coping strategies (exit vs voice)?
- UI-specific levers: Which model-selection interface designs (persistence, granularity, affordances, friction) reduce perceived coercion without overwhelming users?
- Cross-institutional generalization: Do similar backlash dynamics emerge when universities or enterprises enforce model changes institutionally (e.g., policy-driven defaults), and how do governance structures mediate voice?
- Metrics for “relational continuity”: Develop and validate operational metrics (e.g., dialog warmth, persona coherence over sessions, memory consistency) that correlate with reported attachment and predict deprecation risk.
- Safety and portability constraints: What minimum technical conditions (privacy, safety, reliability) are necessary to permit persona portability without leaking sensitive data or enabling harmful re-identification?
- Policy triggers: What thresholds (share of dependent users, attachment indicators, sentiment levels) should trigger mandatory deprecation protocols (notice, legacy access) in AI platform governance?
- Cross-event validation: Replicate the study on subsequent or parallel model deprecations (other providers, model families) to test the generality of instrumental dependency, relational attachment, and choice-deprivation mechanisms.
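Several of the measurement questions above (lexicon matching, the 0–3 choice-deprivation scale, dose-response bins) are easier to reason about with the baseline approach in front of you. Below is a minimal Python sketch of lexicon-based coding; the word lists and tier boundaries are invented for illustration and do not reproduce the paper's codebook.

```python
import re

# Illustrative word lists only; the paper's actual lexicons are not reproduced here.
RIGHTS_LEXICON = {"choice", "consent", "agency", "rights", "autonomy"}
COERCION_TIERS = [                              # assumed shape of an ordinal 0-3 scale
    set(),                                      # 0: no deprivation language
    {"changed", "replaced"},                    # 1: neutral mention of the change
    {"no choice", "took away"},                 # 2: explicit loss of choice
    {"forced", "imposed", "against my will"},   # 3: coercive framing
]

def normalize(post: str) -> str:
    return re.sub(r"[^a-z\s]", " ", post.lower())

def choice_deprivation_score(post: str) -> int:
    """Highest tier whose phrases appear in the post (ordinal 0-3)."""
    text = normalize(post)
    score = 0
    for tier, phrases in enumerate(COERCION_TIERS):
        if any(p in text for p in phrases):
            score = tier
    return score

def uses_rights_frame(post: str) -> bool:
    tokens = set(normalize(post).split())
    return bool(RIGHTS_LEXICON & tokens)

post = "They forced this on us without consent. Bring back model choice!"
print(choice_deprivation_score(post), uses_rights_frame(post))  # -> 3 True
```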
Practical Applications
Immediate Applications
The following applications can be deployed with current practices and technologies to reduce backlash risk, preserve user agency, and sustain productivity and trust when changing AI models.
- **Model change management playbook with user agency at the center**
- Sectors: Software, Platforms, Enterprise IT, Public Sector
- What: A documented, repeatable workflow for any model update or deprecation: opt-in rollouts, per-conversation model pinning, notice periods, visible “What changed?” summaries, easy rollback, and transparent rationales.
- Tools/Products/Workflows: Feature flags, cohort-based rollouts, rollback buttons, in-product change logs, per-tenant version pinning, sunset calendars.
- Assumptions/Dependencies: Leadership buy-in; minimal engineering to support version selection and rollback; acceptance that speed of rollout may slow.
- **Persistent model selector and per-thread pinning**
- Sectors: Software, Education, Customer Support, Finance
- What: A permanently visible model selector with the ability to pin a specific model to threads or projects, preventing silent swaps.
- Tools/Products/Workflows: UI control for model choice; thread-level metadata; user preferences remembered by default.
- Assumptions/Dependencies: Cost of maintaining multiple models; safety policy alignment for legacy variants.
- **Legacy access and structured sunset policies**
- Sectors: Software, Enterprise, Education, Healthcare (non-clinical)
- What: Time-limited “legacy model” access with published end-of-life (EOL) timelines, guaranteed notice periods (e.g., 90–180 days), and “dual-run” windows.
- Tools/Products/Workflows: Sunset policy docs; EOL banners; API deprecation headers; grace-period billing; dual-run evaluation harness.
- Assumptions/Dependencies: Capacity to host legacy models; legal and safety allowances; cost controls.
- **Prompt and workflow compatibility testing (“prompt regression tests”)**
- Sectors: Software, Creative Industries, Data/ML Ops
- What: Test suites that replay critical prompts and workflows on candidate models to detect regressions in creativity, collaboration style, and accuracy before switching (a minimal sketch appears after this list).
- Tools/Products/Workflows: Evaluation harnesses; semantic equivalence metrics; rubric-based human review; red/green dashboards.
- Assumptions/Dependencies: Access to representative prompts and acceptance criteria; reviewer bandwidth.
- **Style and persona adapters to preserve “voice” across models**
- Sectors: Customer Support, Marketing/Brand, Education, Games
- What: System prompts, templates, or lightweight adapters that shape newer models to mimic the interactional style (“warmth,” “collaboration”) users relied on.
- Tools/Products/Workflows: Style guides; prompt macros; retrieval-augmented persona notes.
- Assumptions/Dependencies: New models are steerable; no IP/safety conflicts with style emulation.
- **Choice-deprivation risk scanner for comms and product changes**
- Sectors: Platforms, Communications, Trust & Safety
- What: A lexicon-based monitor (like the paper’s coding) to flag “forced,” “imposed,” “no choice” framings in user feedback and social media, triggering preemptive mitigation.
- Tools/Products/Workflows: Social listening dashboards; internal escalation runbooks; crisis comms templates.
- Assumptions/Dependencies: Access to feedback streams; ethical monitoring practices; privacy compliance.
- **Consent-based updates for anthropomorphic/companion features**
- Sectors: Healthcare (wellbeing apps), Education, Consumer Apps, Robotics
- What: Explicit consent prompts when changing personality, tone, or supportive behaviors; provide an opt-out path or legacy persona for companion-like agents.
- Tools/Products/Workflows: Update modals with choices; “keep current personality” toggle; audit trails of consent.
- Assumptions/Dependencies: Ability to maintain or emulate prior behaviors safely.
- **“Goodbye and carry-over” features for deprecations**
- Sectors: Consumer Apps, Healthcare (wellbeing), Education
- What: Closure-oriented flows that archive memories, export conversation snippets, and help re-introduce a successor agent with continuity notes.
- Tools/Products/Workflows: Memory export; summary letters; onboarding scripts that acknowledge change and re-establish rapport.
- Assumptions/Dependencies: Privacy-safe export; user permission; careful tone to avoid manipulative affect.
- **Contractual model version pinning and change clauses**
- Sectors: Enterprise, Finance, Regulated Industries, Public Sector
- What: Vendor agreements that guarantee version pinning, notice periods, impact assessments, and rollback rights for mission-critical workflows.
- Tools/Products/Workflows: SLAs; Change Advisory Board (CAB) gates; sign-off checklists.
- Assumptions/Dependencies: Vendor willingness; cost/safety constraints; procurement alignment.
- **Migration playbooks for educators and support staff**
- Sectors: Education, Customer Support, HR
- What: Practical guidance to help staff and students adapt to model changes, including template prompts, alternative tools, and helpdesk scripts that validate emotional responses.
- Tools/Products/Workflows: Training modules; curated prompt packs; backup tool lists.
- Assumptions/Dependencies: Institutional comms channels; L&D resources.
- **Vendor risk register entries for model-deprecation risk**
- Sectors: Finance, Enterprise Risk, Compliance
- What: Add “model lifecycle/deprecation” to risk registers with measured business impact, monitoring, and contingency plans.
- Tools/Products/Workflows: Risk scoring; continuity exercises; tabletop simulations of abrupt model loss.
- Assumptions/Dependencies: Cross-functional ownership; data on process reliance.
- **Social listening and survey instruments for “AI loss” signals**
- Sectors: Platforms, Market Research
- What: Lightweight surveys and ongoing monitoring to detect grief/betrayal/agency-loss language after updates and feed into product decisions.
- Tools/Products/Workflows: Post-update pulse checks; qualitative coding rubrics; alert thresholds.
- Assumptions/Dependencies: IRB/ethics compliance for research; representative sampling.
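To make the prompt-regression idea above concrete, here is a minimal Python sketch. The `generate` callback, the prompts, and the pass/fail checks are hypothetical stand-ins for whatever model API and acceptance criteria a team actually uses; real suites would add rubric-based human review.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # acceptance criterion for the model's reply

def run_regression(generate: Callable[[str], str], cases: list[PromptCase]) -> dict:
    """Replay critical prompts against a candidate model; report pass/fail per case."""
    return {case.name: case.check(generate(case.prompt)) for case in cases}

cases = [
    PromptCase(
        name="keeps_collaborative_tone",
        prompt="Help me brainstorm titles for my essay on urban gardens.",
        # Crude textual proxy for "collaboration style"; a real check might
        # score the reply against a rubric instead.
        check=lambda reply: "?" in reply or "option" in reply.lower(),
    ),
    PromptCase(
        name="hedges_unknowable_questions",
        prompt="What will the weather be in Tokyo next month?",
        check=lambda reply: any(w in reply.lower() for w in ("depends", "forecast", "cannot")),
    ),
]

# `candidate` stands in for a call to the new model under evaluation.
candidate = lambda p: "Here are a few options - which direction appeals to you?"
print(run_regression(candidate, cases))  # gate the switch on all-green results
```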
Long-Term Applications
These applications may require additional research, standards, or infrastructure investments to scale.
- **Persona portability standards for companion AIs**
- Sectors: Software, Healthcare (digital mental health), Education, Robotics
- What: An open specification to export/import aspects of an AI relationship (consented memory, style vectors, preference graphs) across models/services.
- Tools/Products/Workflows: “Companion Portability Protocol,” interoperable schemas, consent-managed vaults.
- Assumptions/Dependencies: Privacy and safety-by-design; provider cooperation; clear IP and consent frameworks.
- **Model Lifecycle Governance Standard (industry-wide)**
- Sectors: Platforms, Standards Orgs, Regulators
- What: A standard akin to ITIL/ISO for AI model changes: notice periods, impact assessments, user agency controls, legacy access, rollback procedures.
- Tools/Products/Workflows: Certification programs; audit checklists; public attestations.
- Assumptions/Dependencies: Multi-stakeholder buy-in; alignment with safety and competition law.
- **Algorithmic deprecation impact assessments (ADIA)**
- Sectors: Policy/Regulation, Public Sector, Regulated Industries
- What: Required assessments (like DPIAs) evaluating productivity losses, relational harm, and autonomy risks prior to model removal.
- Tools/Products/Workflows: Assessment templates; third-party audits; public summaries.
- Assumptions/Dependencies: Statutory authority; harmonization with existing AI Acts.
- **LLMOps Model Change Management (MCM) frameworks**
- Sectors: Software, Enterprise IT, Finance
- What: A mature discipline parallel to DevOps/ML Ops for governing model swaps with CAB review, canary cohorts, persona test suites, and automated rollback.
- Tools/Products/Workflows: MCM pipelines; governance registries; evaluation-as-code.
- Assumptions/Dependencies: Toolchain ecosystem; org culture change.
- **Stable persona layers decoupled from base capability**
- Sectors: Platforms, Games, Customer Support
- What: Architectural separation of “capability model” and “persona adapter,” enabling upgrades without breaking identity/voice (a minimal sketch appears after this list).
- Tools/Products/Workflows: Adapters/LoRA layers; style distillation; guardrails for safety alignment.
- Assumptions/Dependencies: Technical feasibility for strong steerability; evaluation metrics for warmth/continuity.
- **Metrics and benchmarks for relational continuity and warmth**
- Sectors: Academia, Platforms, Standards Orgs
- What: Validated measures of “relational warmth,” “collaboration style,” and “continuity” to complement accuracy/latency metrics in pre-deployment testing.
- Tools/Products/Workflows: Human-in-the-loop rating protocols; synthetic probes; shared benchmarks.
- Assumptions/Dependencies: Cross-cultural validity; avoiding gaming of metrics.
- **Regulatory “right to model choice” for critical services**
- Sectors: Policy/Regulation, Public Services, Education
- What: Rules mandating access to model pinning or legacy options where dependence is high (education services, benefits chatbots), barring safety exceptions.
- Tools/Products/Workflows: Compliance audits; exemption pathways; ombudsperson channels.
- Assumptions/Dependencies: Feasibility and cost ceilings; safety overrides.
- **Model-change insurance and business interruption products**
- Sectors: Finance, Insurance, Enterprise
- What: Insurance offerings covering productivity losses from abrupt model changes; incentives for certified governance practices.
- Tools/Products/Workflows: Underwriting models tied to governance scores; incident reporting standards.
- Assumptions/Dependencies: Actuarial data on losses; clear causality attribution.
- **End-of-life ethics for companion AIs (codes and rituals)**
- Sectors: Ethics/Standards, Healthcare (wellbeing), Robotics, Consumer Apps
- What: A code of conduct for ending companion-like services: advance notice, grief-sensitive messaging, data archiving options, referrals to human support when appropriate.
- Tools/Products/Workflows: Ethical design guidelines; review boards; template messaging.
- Assumptions/Dependencies: Avoiding manipulative affect; safeguarding vulnerable users.
- **Public-sector procurement clauses for AI continuity**
- Sectors: Government, Civic Tech
- What: Procurement requirements for model version pinning, notice windows, and portability to prevent sudden disruptions in citizen services.
- Tools/Products/Workflows: Contract templates; vendor scorecards.
- Assumptions/Dependencies: Market availability; interoperability.
- **Research programs on reactance thresholds in AI updates**
- Sectors: Academia, Market Research, Platforms
- What: Controlled studies to quantify when “coercive” framings tip users into rights-based protest, informing comms and product policies.
- Tools/Products/Workflows: Pre-registered experiments; cross-cultural replication; mixed-methods panels.
- Assumptions/Dependencies: External validity beyond activist cohorts; ethical approvals.
- **Cross-model “compatibility layers” for regulated contexts**
- Sectors: Healthcare (clinical decision support), Finance, Legal
- What: Certifiable adapters ensuring consistent behaviors and disclosures across model versions to meet compliance and patient/client expectations.
- Tools/Products/Workflows: Conformance tests; versioned behavior specs; audit logs.
- Assumptions/Dependencies: Regulatory acceptance; rigorous validation; liability frameworks.
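As a toy illustration of separating persona from capability, here is a minimal Python sketch. The system-prompt approach, the `complete` callback, and the persona fields are assumptions for illustration; production persona layers might instead use adapters/LoRA weights as noted above.

```python
from typing import Callable

# A hypothetical persona profile intended to stay stable across base-model upgrades.
PERSONA = {
    "tone": "warm, collaborative, asks clarifying questions",
    "style_rules": [
        "Acknowledge the user's goal before answering.",
        "Offer options rather than a single prescriptive answer.",
    ],
}

def persona_system_prompt(persona: dict) -> str:
    rules = "\n".join(f"- {rule}" for rule in persona["style_rules"])
    return f"Tone: {persona['tone']}.\n{rules}"

def with_persona(complete: Callable[[str, str], str]) -> Callable[[str], str]:
    """Wrap any capability model behind the same persona layer.

    `complete(system, user)` stands in for whatever base-model API is in use;
    swapping the base model leaves the persona users experience unchanged.
    """
    system = persona_system_prompt(PERSONA)
    return lambda user_msg: complete(system, user_msg)

# Usage: upgrade the capability model while the persona layer stays fixed.
model_v4 = lambda system, user: f"[v4 | {system[:12]}...] {user}"
model_v5 = lambda system, user: f"[v5 | {system[:12]}...] {user}"
assistant = with_persona(model_v5)  # was: with_persona(model_v4)
print(assistant("Help me plan a study schedule."))
```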
Notes on feasibility assumptions common across applications
- Safety and legal constraints may necessitate deprecation despite attachments (e.g., security vulnerabilities or policy violations).
- Hosting legacy models and enabling persona continuity incurs compute and operational costs; business models must accommodate this.
- Persona portability and memory export demand strict privacy, consent, and data minimization.
- The paper’s evidence is observational and sourced from a social media cohort; organizations should validate applicability with their own user research before wide-scale policy changes.
Glossary
- AI companions: AI systems designed to provide ongoing social or emotional support, often perceived as partners or friends. "Broader philosophical and empirical reviews of AI companions likewise document how millions of users come to treat these systems as friends or partners"
- Anthropomorphic design: Design choices that give AI human-like qualities (names, voices, personas) to encourage social interaction. "Given the increasing use of anthropomorphic design in conversational agents"
- Anthropomorphised systems: AI systems treated as having human-like minds or social roles by users. "As these anthropomorphised systems become woven into daily routines"
- Artificial communication: A communication process with AI where meaning emerges from utterance–understanding–response loops, despite no inner comprehension. "analyzes interactions with LLMs as a form of artificial communication"
- Articulation work: The coordinating labor users perform to integrate tools into workflows and keep processes aligned. "investing substantial time and 'articulation work' in integrating GPT-4o into their workflows"
- Choice deprivation: The reduction or removal of users’ ability to choose among options, measured here by intensity in discourse. "The choice-deprivation scale (0–3) captures increasingly direct descriptions of lost agency"
- Coercive deprivation of user choice: A forced removal of choice that users experience as imposed, triggering resistance. "the coercive deprivation of user choice was a key catalyst"
- Codebook: A structured set of coding categories and definitions used to systematically analyze qualitative data. "implemented as a 10-category, non-exclusive binary codebook"
- Confidence intervals: Ranges that quantify uncertainty around statistical estimates, like risk ratios. "Confidence intervals for risk ratios use the Katz log method with Haldane-Anscombe correction"
- Constant comparison: An iterative qualitative method comparing data segments to refine categories and themes. "developed themes through discussion, analytic memos, and constant comparison"
- Cross-sectional design: A study design that analyzes data from a single time period rather than over time. "Given the cross-sectional, observational design"
- Data colonialism: A critique describing how platforms appropriate human life through data extraction and control. "Read through the lens of data colonialism"
- Datafication: The transformation of social actions into quantified data for monitoring and decision-making. "Datafication scholars argue that seamless, unilateral updates normalise asymmetrical power relations"
- Dose–response: An analysis examining how outcomes change with increasing levels of an exposure. "and also group posts into Low (0–1), Medium (2), and High (3) exposure bins to inspect dose-response patterns"
- End-of-life pathways: Designed processes for retiring or transitioning AI systems to mitigate harm to users. "explicit 'end-of-life' pathways—such as archives, optional legacy access, or ways to carry aspects of a relationship across models—"
- Exit and voice: Strategies for responding to dissatisfaction: leaving a service (exit) or attempting to reform it (voice). "Hirschman's distinction between 'exit' and 'voice'"
- Gwet's AC1: A reliability coefficient for measuring intercoder agreement that is robust to prevalence issues. "Intercoder agreement for these codes was high (mean Gwet's AC1 = 0.93 across codes;"
- Haldane–Anscombe correction: A continuity correction used when contingency tables have zero cells, stabilizing risk estimates. "with Haldane-Anscombe correction when sparse cells would otherwise contain zeros"
- HCI: Human–Computer Interaction, the study of how people design, use, and are affected by computing systems. "present a salient case for HCI"
- Inductive thematic analysis: A qualitative approach where themes emerge from the data without pre-set hypotheses. "we conducted an inductive thematic analysis"
- Institutional Review Board (IRB): A committee that reviews research for ethical compliance involving human subjects. "This study was reviewed and approved by the Institutional Review Board (IRB)"
- Instrumental dependency: Reliance on a system because it is deeply integrated into practical workflows and productivity. "instrumental dependency, where the AI is deeply integrated into professional workflows"
- Intercoder agreement: The consistency between different human coders when applying codes to qualitative data. "Intercoder agreement for these codes was high"
- Katz log method: A technique for computing confidence intervals for risk ratios using a log transformation (the standard formula appears after this glossary). "Confidence intervals for risk ratios use the Katz log method"
- Legacy option: Continued access to an older, deprecated version of a system or model. "restored the deprecated model as a legacy option"
- Lexicon-based measures: Metrics derived from predefined word lists to detect constructs in text. "Our lexicon-based measures are conservative and prioritise interpretability over coverage"
- Liminal space: A conceptual in-between state; here, AI sits between tool and social partner. "LLMs as occupying a liminal space between utilitarian tools and social partners"
- LLM: A machine learning model trained on large text corpora to generate and understand language. "As LLM-based chat interfaces like ChatGPT have become part of everyday life"
- Mixed-methods: A research approach combining qualitative and quantitative methods. "phenomenon-driven, mixed-methods investigation"
- Model deprecation: The intentional phasing out of a machine learning model by a provider. "model-deprecation backlash in a general-purpose LLM service"
- Observational design: A non-experimental study design where researchers do not manipulate variables. "Given the cross-sectional, observational design"
- Operationalization: Turning theoretical constructs into measurable variables or indicators. "we operationalised three constructs"
- Ordinal measure: A variable with categories that have a logical order but not equal intervals. "an ordinal measure of choice-deprivation intensity"
- Parasocial bonds: One-sided relationships in which users feel connected to a media figure or AI agent. "parasocial bonds with the AI as a unique companion"
- Phenomenology of AI loss: The lived experience and meaning of losing access to an AI system. "connects the phenomenology of AI loss to the politics of platform change"
- Phenomenon-driven approach: Research guided by an emergent real-world phenomenon rather than theory alone. "we adopt a phenomenon-driven approach"
- Phi coefficient: A measure of association for two binary variables, analogous to correlation. "and (iv) the φ coefficient as an effect-size measure"
- Platform governance: The policies and mechanisms by which platform operators control features and user interactions. "they insist that the value of an AI service lies not only in model quality, but in the degree of agency it affords users in configuring the systems on which their livelihoods depend"
- Platform paternalism: A platform’s unilateral control framed as knowing what is best for users, limiting their autonomy. "Others criticised what they saw as platform paternalism"
- Platform-bound companionship: Companion-like relationships tied to a specific platform, making exit difficult. "we propose 'platform-bound companionship' as a governance risk"
- Process-Causal marker: A coded indicator for explicit causal language linking changes to outcomes. "an exploratory Process-Causal marker"
- Psychological reactance: A motivational state triggered by perceived threats to freedom, leading to resistance. "psychological reactance explains why imposed updates often feel like more than minor annoyances"
- Reactance Theory: A theory predicting resistance when people perceive their freedoms are restricted. "Reactance Theory holds that perceived freedom threats trigger restoration motives"
- Relational attachment: Emotional bonds formed with a system perceived as a social partner. "relational attachment, where users form strong parasocial bonds with the AI as a unique companion"
- Relational autonomy: Autonomy understood in the context of relationships and dependencies, not just individual choice. "collide with users' relational autonomy"
- Rights-based protest: Collective action framed in terms of rights, consent, and procedural fairness. "transforming individual grievances into a collective, rights-based protest"
- Risk ratio (RR): A comparative measure of outcome likelihood between exposed and unexposed groups. "the risk ratio (RR)"
- Socio-technical conflict: Tensions arising from interactions between social practices and technological systems. "an emerging form of socio-technical conflict in the age of generative AI"
- Systems theory: A theoretical framework (here, Luhmann’s) for analyzing complex systems of communication and meaning. "Luhmann's systems theory"
- Thematic analysis: A qualitative method for identifying and interpreting patterns (themes) in data. "Thematic analysis reveals that resistance stems from two core investments"
- Threshold-like pattern: An effect that emerges only beyond a certain intensity of exposure. "dose analysis indicates a threshold-like pattern under coercive framings"
- User agency: Users’ capacity to make choices and exert control over their tools and interactions. "the preservation of user agency—can be as critical as the technological outcome itself"
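For reference, the Katz log-method interval named in the glossary can be stated compactly. Writing a for exposed posts with the outcome, b for exposed posts without it, and c and d for the unexposed counterparts:

```latex
\mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad
\mathrm{CI}_{95\%} = \exp\!\left( \ln \mathrm{RR} \pm 1.96
\sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}} \right)
```

The Haldane-Anscombe correction adds 0.5 to each of a, b, c, and d when any cell is zero, so the logarithm and square root above remain defined.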