
LLMs are Introvert

Published 8 Jul 2025 in cs.AI and cs.SI | (2507.05638v1)

Abstract: The exponential growth of social media and generative AI has transformed information dissemination, fostering connectivity but also accelerating the spread of misinformation. Understanding information propagation dynamics and developing effective control strategies is essential to mitigate harmful content. Traditional models, such as SIR, provide basic insights but inadequately capture the complexities of online interactions. Advanced methods, including attention mechanisms and graph neural networks, enhance accuracy but typically overlook user psychology and behavioral dynamics. LLMs, with their human-like reasoning, offer new potential for simulating psychological aspects of information spread. We introduce an LLM-based simulation environment capturing agents' evolving attitudes, emotions, and responses. Initial experiments, however, revealed significant gaps between LLM-generated behaviors and authentic human dynamics, especially in stance detection and psychological realism. A detailed evaluation through Social Information Processing Theory identified major discrepancies in goal-setting and feedback evaluation, stemming from the lack of emotional processing in standard LLM training. To address these issues, we propose the Social Information Processing-based Chain of Thought (SIP-CoT) mechanism enhanced by emotion-guided memory. This method improves the interpretation of social cues, personalization of goals, and evaluation of feedback. Experimental results confirm that SIP-CoT-enhanced LLM agents more effectively process social information, demonstrating behaviors, attitudes, and emotions closer to real human interactions. In summary, this research highlights critical limitations in current LLM-based propagation simulations and demonstrates how integrating SIP-CoT and emotional memory significantly enhances the social intelligence and realism of LLM agents.

Summary

  • The paper introduces a SIP-enhanced framework that integrates social information processing theory into LLMs to bridge gaps in human-like social cognition.
  • It demonstrates improved simulation fidelity with enhanced stance detection and emotional alignment compared to baseline LLM agents.
  • Statistical evaluations across 13 SIP aspects reveal closer alignment with human judgments, reducing bias and ensuring diverse response trajectories.

Introduction

The paper "LLMs are Introvert" (2507.05638) introduces a framework aimed at enhancing the social cognitive capabilities of LLMs by integrating Social Information Processing (SIP) theory. This work explores significant gaps in the ability of current LLMs to simulate nuanced human-like social interactions and proposes a novel cognitive architecture incorporating SIP stages and emotion-guided memory to address these limitations.

Framework Overview

The framework enhances LLM-driven social simulations by embedding SIP theory stages (Figure 1). Traditional agent-based models employing LLMs lack the granularity needed to simulate the multifaceted nature of human social cognition, producing significant discrepancies from human behavior in stance detection and emotional alignment. The SIP-enhanced cognitive architecture includes long-term and short-term social memories, which store social knowledge and emotional responses and enable more context-sensitive decision-making.

Figure 1: Overview of the SIP-enhanced social-agent framework, integrating SIP stages and decision-making modules within an LLM agent.
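To make the memory design concrete, here is a minimal, hypothetical Python sketch of an agent with emotion-tagged short- and long-term memories. The class names, salience threshold, and eviction rule are illustrative assumptions, not the paper's implementation; the sketch only shows the idea that emotionally salient items are consolidated into long-term memory and that retrieval is emotion-guided.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    emotion: str        # e.g. "anger", "trust" -- illustrative tag set
    intensity: float    # emotional intensity in [0.0, 1.0]

@dataclass
class SIPAgent:
    """Hypothetical SIP-style agent: short-term memory holds recent
    interactions; only emotionally salient items are consolidated."""
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)
    salience_threshold: float = 0.6   # assumed cutoff for consolidation
    short_term_capacity: int = 5      # assumed capacity

    def observe(self, item: MemoryItem) -> None:
        self.short_term.append(item)
        if len(self.short_term) > self.short_term_capacity:
            evicted = self.short_term.pop(0)
            # Consolidate only emotionally intense items into long-term memory.
            if evicted.intensity >= self.salience_threshold:
                self.long_term.append(evicted)

    def recall(self, emotion: str) -> list:
        # Emotion-guided retrieval: return memories with a matching emotion tag.
        return [m for m in self.short_term + self.long_term
                if m.emotion == emotion]
```

Under this sketch, an angry post with intensity 0.9 survives eviction from short-term memory and remains retrievable via `recall("anger")`, while low-intensity items are simply forgotten.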

Baseline Evaluation

The initial evaluation of baseline LLM agents reveals systematic differences between human judgements and LLM-generated responses across five stages of social cognition: encoding, interpretation, goal classification, response, and evaluation. These discrepancies are highlighted by the SIP testing paradigm, which shows that LLMs struggle particularly with the flexible interpretation of ambiguous social cues and with evaluating the social implications of their responses.

Figure 2: SIP-testing benchmark uncovers systematic differences between human judgements and baseline LLM agents.
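The five stages above can be read as a sequential pipeline. The Python stubs below are illustrative assumptions (keyword-based cue spotting, a fixed goal table); they show only how the stages compose, not how the paper implements them.

```python
def encode(raw_post: str) -> dict:
    # Stage 1 (encoding): extract surface social cues; crude keyword
    # spotting stands in for a real cue extractor.
    cue_words = {"fake", "urgent", "share"}   # assumed cue lexicon
    return {"text": raw_post,
            "cues": [w for w in raw_post.lower().split() if w in cue_words]}

def interpret(encoded: dict) -> str:
    # Stage 2 (interpretation): attribute intent to the detected cues.
    return "manipulative" if encoded["cues"] else "neutral"

def select_goal(intent: str) -> str:
    # Stage 3 (goal classification): pick a personal goal for the intent.
    return "debunk" if intent == "manipulative" else "engage"

def respond(goal: str) -> str:
    # Stage 4 (response): map the goal to a behavioral response.
    return {"debunk": "question the source",
            "engage": "reply supportively"}[goal]

def evaluate(response: str, feedback: int) -> bool:
    # Stage 5 (evaluation): judge the response by its social feedback
    # (here, simply whether net approval is positive).
    return feedback > 0

def sip_pipeline(post: str, feedback: int) -> dict:
    intent = interpret(encode(post))
    goal = select_goal(intent)
    action = respond(goal)
    return {"intent": intent, "goal": goal,
            "action": action, "accepted": evaluate(action, feedback)}
```

The benchmark's finding is that baseline LLMs break down at stages 2 and 5 of such a pipeline: interpreting ambiguous cues and evaluating feedback on their own responses.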

Performance of SIP-Enhanced LLM Agents

The integration of SIP-based thought mechanisms improves both macro- and micro-level simulation fidelity. SIP-enhanced agents demonstrate reduced bias ($\Delta_{\text{bias}}$) and increased alignment in stance, content, and emotional expression compared to baseline models (Figure 3). Their improved performance in social simulations suggests greater fidelity in capturing human-like social dynamics.

Figure 3: Macro- and micro-level evaluation of SIP-enhanced LLM agents showing improved alignment and diversity in opinion trajectories.
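The summary above does not spell out how $\Delta_{\text{bias}}$ is computed. One plausible reading, sketched below purely as an assumption, measures bias as the total variation between the agent's and humans' stance distributions, so that a smaller value means the agent population's stances track the human population's.

```python
def stance_distribution(stances: list[str]) -> dict[str, float]:
    """Empirical distribution over stance labels (e.g. 'pro', 'anti')."""
    total = len(stances)
    return {s: stances.count(s) / total for s in set(stances)}

def delta_bias(agent_stances: list[str], human_stances: list[str]) -> float:
    """Assumed bias gap: L1 distance between the two stance distributions.
    0.0 means identical distributions; 2.0 is maximal divergence."""
    p = stance_distribution(agent_stances)
    q = stance_distribution(human_stances)
    labels = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in labels)
```

For example, if agents answer "pro" 80% of the time while humans split 50/50, this gap is 0.6; a SIP-enhanced agent population that also splits 50/50 would score 0.0.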

Statistical Insights into SIP-Enhanced Agents

Radar charts comparing human, baseline, and SIP-enhanced agents across 13 SIP aspects show that the enhanced agents produce distributions more in line with human responses, as evidenced by closer alignment in the mean, variability, asymmetry, and tails of the response distributions (Figure 4). These results substantiate the hypothesis that SIP-based cognitive enhancements can significantly narrow the gap between artificial and human social cognition.

Figure 4: SIP-enhanced cognitive architecture achieves human-like distributional statistics across the 13 SIP items.
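The four distributional statistics named above (mean, variability, asymmetry, tails) can be computed directly from per-item responses. This sketch uses population moments, with skewness and excess kurtosis as the standard third- and fourth-moment ratios; it illustrates the comparison, not the paper's exact evaluation code.

```python
import statistics

def distribution_summary(xs: list[float]) -> dict[str, float]:
    """Summarise a response distribution by its mean, spread (std),
    asymmetry (skewness), and tail weight (excess kurtosis)."""
    n = len(xs)
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    m3 = sum((x - mu) ** 3 for x in xs) / n   # third central moment
    m4 = sum((x - mu) ** 4 for x in xs) / n   # fourth central moment
    return {
        "mean": mu,
        "std": sd,
        "skewness": m3 / sd ** 3 if sd else 0.0,
        "kurtosis": m4 / sd ** 4 - 3.0 if sd else 0.0,  # excess kurtosis
    }
```

Comparing these four numbers per SIP item, as the radar charts do, checks not just that agents match humans on average but that they reproduce the spread and shape of human response distributions.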

Conclusion

The integration of Social Information Processing theory into LLM-based social simulations enhances the ability of LLMs to mimic human social behavior in complex environments. While current LLMs suffer from rigid social-cognitive schemas, the proposed SIP-enhanced architecture offers significant improvements in the alignment and diversity of social behavior. However, the encoding of emotional cues remains an area requiring further refinement. Future work could extend this framework by incorporating multimodal inputs and validating it across diverse sociocultural contexts. SIP-enhanced LLMs have the potential to serve as robust platforms for assessing AI-driven policy interventions and compliance within AI governance frameworks.
