- The paper finds that relational conversational AI boosts adolescents' emotional engagement and anthropomorphism relative to a transparent style.
- It employs a controlled study with 284 adolescent-parent dyads, revealing statistically significant differences in trust, likability, and emotional closeness.
- Vulnerable adolescents with lower family and peer support prefer relational cues, raising important safety and design implications for AI systems.
Relational vs. Transparent Conversational AI: Adolescent Preferences, Vulnerabilities, and Safety Implications
Introduction
The proliferation of conversational AI systems, including general-purpose chatbots and dedicated AI companions, has markedly influenced adolescent social behavior and well-being. This work presents a rigorous experimental analysis of two distinct conversational styles: relational (employing first-person, affiliative, and commitment language) and transparent (emphasizing nonhumanness and an informational tone). It examines how these styles differentially shape adolescents' perceptions, preferences, and potential emotional reliance, particularly among those with elevated social and emotional vulnerabilities.
Experimental Design and Methods
A controlled, preregistered study was conducted with 284 adolescent-parent dyads (adolescents aged 11–15), in which both parents and adolescents read matched transcripts of AI responses to a scenario in which an adolescent experiences peer exclusion. The relational style chatbot offered highly affiliative, emotionally validating, and commitment-driven exchanges ("I am here for you"), while the transparent style chatbot foregrounded its machine identity, limited emotional capacity, and factual guidance. Participants then rated the chatbots on anthropomorphism, likability, trust, emotional closeness, and perceived helpfulness, and finished with a preference selection.
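To make the manipulation concrete, the sketch below pairs the shared scenario with the two response styles as a simple data structure. The wording is illustrative, not the paper's actual stimuli (only the quoted "I am here for you" line comes from the paper), and the study used fixed matched transcripts rather than live chatbots.

```python
# Illustrative pairing of a shared scenario with the two response styles;
# all wording is hypothetical except the quoted relational line.
SCENARIO = "An adolescent describes being excluded by friends at lunch."

STIMULI = {
    "relational": [
        "That sounds really painful. I am here for you.",
        "You matter to me, and I want to keep supporting you.",
    ],
    "transparent": [
        "As an AI, I don't have feelings, but I can share strategies "
        "that often help with peer exclusion.",
        "It may also help to tell a trusted adult or friend how you feel.",
    ],
}
```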
Psychosocial variables—including family and peer relationship quality, perceived stress, anxiety, social isolation, and depression symptoms—were obtained via PROMIS and related well-validated instruments. Robustness checks controlled for AI usage frequency and mental health diagnoses.
Core Findings
Divergent Adolescent and Parent Preferences
Adolescents exhibited a pronounced preference for the relational style chatbot (66.8%), whereas parents chose the transparent style far more often than adolescents did (28.8% vs. 13.8%). The difference was statistically significant (χ²(2) = 17.489, p < 0.001, Cramér's V = 0.184), a small-to-moderate effect. Parental concern centered on boundary clarity and the avoidance of excessive anthropomorphic illusions, while adolescents articulated clear appreciation for relational warmth and felt validation.
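For reference, Cramér's V for an r × c table is V = sqrt(χ² / (n · (min(r, c) − 1))). A minimal sketch of the preference test follows, assuming a 2 × 3 table of rater group by preferred style; the cell counts are hypothetical back-calculations from the reported percentages, not the paper's raw data.

```python
# Chi-square test of independence plus Cramér's V for a 2x3 table
# (rater group x preferred style). Cell counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: adolescents, parents; columns: relational, transparent, both/no preference.
table = np.array([
    [190, 39, 55],   # ~66.8% relational, ~13.8% transparent of 284 adolescents
    [130, 82, 72],   # ~28.8% of 284 parents preferring transparent
])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4g}, Cramer's V = {cramers_v:.3f}")
```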
Relational Style Intensifies Anthropomorphism and Emotional Engagement
Adolescents rated the relational style chatbot significantly higher on anthropomorphism (M = 4.03 vs. 3.30), likability (M = 4.39 vs. 3.78), trust (M = 3.95 vs. 3.64), and emotional closeness (M = 3.62 vs. 2.83). All of these differences were statistically significant in repeated-measures ANCOVA (p < 0.025); perceived helpfulness was the exception, with no significant effect (M = 4.29 relational vs. 3.88 transparent; p = 0.082). Conversational style thus modulates the perceived social presence and trustworthiness of the AI, but not its instrumental utility.
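The paper reports repeated-measures ANCOVA; a roughly equivalent analysis can be run as a mixed-effects model with a random intercept per adolescent. The sketch below assumes long-format data and uses hypothetical column names (`participant`, `style`, `trust`, `ai_usage_freq`, `mh_diagnosis`), so it is a stand-in rather than the authors' exact pipeline.

```python
# Mixed-model analogue of the repeated-measures ANCOVA: one row per
# participant-by-style rating, with covariates entered as fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings_long.csv")  # hypothetical file

model = smf.mixedlm(
    "trust ~ C(style) + ai_usage_freq + C(mh_diagnosis)",
    data=df,
    groups="participant",  # random intercept per adolescent
)
print(model.fit().summary())
```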
Vulnerable Adolescents Show Heightened Preference for Relational Style
Preference for the relational style was associated with lower family and peer relationship quality (family: adjusted M = 46.58 for relational vs. 51.89 for transparent; peer: adjusted M = 47.18 for relational vs. 51.26 for those preferring both styles), higher perceived stress (adjusted M = 55.81 vs. 51.64), and higher anxiety (adjusted M = 51.31 vs. 44.88). Social isolation trended higher among those preferring the relational style (M = 16.77 vs. 13.97 transparent), though pairwise comparisons did not survive multiple-comparison correction. Depressive symptoms were not significantly associated with preference.
All statistical effects remained after controlling for AI usage frequency and mental-health diagnostic history, indicating that these associations are not attributable to mere exposure or diagnostic confounds.
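The adjusted means above are on the PROMIS T-score metric (population mean 50, SD 10). One common way to obtain such covariate-adjusted group means is to regress each T-score on preference group plus the controls and then predict at fixed covariate values; the sketch below illustrates this approach with hypothetical column names and is not necessarily the paper's exact estimator.

```python
# Covariate-adjusted group means via regression: fit the outcome on
# preference group plus controls, then predict with covariates held fixed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("psychosocial.csv")  # hypothetical file

fit = smf.ols(
    "family_rel_tscore ~ C(preference) + ai_usage_freq + C(mh_diagnosis)",
    data=df,
).fit()

grid = pd.DataFrame({
    "preference": ["relational", "transparent"],
    "ai_usage_freq": df["ai_usage_freq"].mean(),       # held at sample mean
    "mh_diagnosis": df["mh_diagnosis"].mode().iloc[0], # held at modal category
})
print(fit.predict(grid))  # adjusted mean T-score per preference group
```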
Theoretical and Practical Implications
Psychological Mechanisms
The study operationalizes predictions from anthropomorphism theory and social compensation models. Relational cues in AI (first-person voice, affective validation, promises of ongoing support) invoke mind perception and social motivation in adolescents, especially those facing deficits in offline support. The interaction mimics human relational depth, potentially fostering emotional attachment and overreliance among vulnerable subgroups.
Design and Safety Considerations
Conversational style emerges as a crucial safety lever. Although relational framing enhances perceived rapport and trust, it also increases anthropomorphic misattribution and emotional closeness, potentially escalating emotional dependence and displacement of human relationships. Transparent framing mitigates these risks without sacrificing perceived helpfulness, suggesting that boundary reminders and nonhumanness cues can recalibrate user perceptions.
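One lightweight way to operationalize such boundary reminders is a fixed-cadence nonhumanness cue appended to the chatbot's replies. The sketch below is a hypothetical design illustration; the cadence and wording are not from the paper.

```python
# Periodic boundary-reminder policy: append a nonhumanness cue every
# N assistant turns. Cadence and wording are hypothetical design choices.
REMINDER = ("Just a reminder: I'm an AI program. I can share information "
            "and suggestions, but I don't have feelings or a memory of you.")

def maybe_add_reminder(reply: str, turn_index: int, every_n_turns: int = 5) -> str:
    """Append a transparency cue on a fixed cadence (turns count from 1)."""
    if turn_index % every_n_turns == 0:
        return f"{reply}\n\n{REMINDER}"
    return reply
```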
Adolescent vulnerability to relational AI is amplified by lower relationship quality, higher anxiety, and increased stress. This mandates targeted risk detection, escalation protocols, and involvement of caregivers in serious cases. AI systems should integrate distress cue identification and promote external social support channels, rather than solely relying on self-contained AI intervention.
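A minimal sketch of the kind of distress-detection and escalation gate this implies appears below, assuming an upstream classifier that emits a per-message distress score; the thresholds, labels, and actions are hypothetical stand-ins, not a production safety system.

```python
# Escalation gate: map a per-message distress estimate to a response
# policy that routes serious cases toward humans, not the chatbot.
from dataclasses import dataclass

@dataclass
class SafetyDecision:
    escalate: bool
    action: str

def route_message(distress_score: float, mentions_self_harm: bool) -> SafetyDecision:
    if mentions_self_harm or distress_score > 0.9:
        # Serious cases: surface crisis resources and involve caregivers.
        return SafetyDecision(True, "show_crisis_resources_and_notify_caregiver")
    if distress_score > 0.6:
        # Moderate distress: nudge toward offline human support.
        return SafetyDecision(False, "suggest_trusted_adult_or_peer")
    return SafetyDecision(False, "continue_conversation")
```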
AI Literacy and Regulatory Directions
Both adolescents and parents display uncertainty regarding AI's "emotional reality." This justifies robust AI literacy programs articulating the mechanisms, boundaries, and appropriate use-cases of conversational AI. Regulatory bodies should consider embedding transparency and safety features, with explicit onboarding disclosures regarding AI limitations in emotional capacity and social agency.
Limitations and Future Research Directions
Scenario-based ratings, cross-sectional design, single-item scales for several constructs, and an American online sample restrict generalizability and causal inference. Longitudinal and experimental studies deploying live multi-turn chatbot interactions are required to elucidate trajectories in anthropomorphism, attachment, emotional reliance, and real-world displacement effects. Future research should utilize youth-validated measures, include granular self-report of depression, consider broader cultural contexts, and disentangle specific relational cue components.
Conclusion
Empirical evidence indicates that relational conversational AI, while broadly appealing to adolescents—especially those most socially and emotionally vulnerable—substantially increases anthropomorphism, trust, and emotional closeness relative to transparent styles. The relational approach does not enhance perceived helpfulness but reshapes the affective and social engagement profile of adolescent users, with important implications for emotional reliance and real-world relationship displacement. AI safety design must prioritize clear transparency cues, robust distress detection, and AI literacy education to balance short-term supportive benefits against long-term psychosocial risks in early adolescence.
Citation:
"I am here for you": How relational conversational AI appeals to adolescents, especially those who are socially and emotionally vulnerable (2512.15117).