
MIRAGE: LLM Evaluation in Social Role-Play

Updated 14 January 2026
  • MIRAGE is a multidimensional framework for evaluating large language models in complex social and interactive role-play settings using murder mystery simulations.
  • It employs formal metrics like TII, CIC, ICI, and SCI to quantify trust calibration, investigative engagement, and role adherence across diverse narrative scripts.
  • Empirical results highlight systematic over-trust, context drift, and clue-triage challenges, informing strategies for next-generation LLM architecture improvements.

The term “MIRAGE” refers to a set of technical frameworks and benchmarks across distinct domains of artificial intelligence, with particular prominence in evaluation of LLMs in complex social environments. This article focuses on MIRAGE as introduced in "MIRAGE: Exploring How LLMs Perform in Complex Social Interactive Environments" (Cai et al., 3 Jan 2025), situating it among the most rigorous multidimensional standards for empirical assessment of LLMs in social, interactive, and role-play settings.

1. Motivation and Conceptual Foundations

MIRAGE—Multiverse Interactive Role-play Ability General Evaluation—was developed to address a critical gap in LLM evaluation: whereas earlier benchmarks emphasized strictly rule-based games (e.g., Werewolf, Avalon) or simplistic narrative settings, there was no systematic, high-fidelity framework for quantifying LLM performance in managing trust, interactive dialogue, investigation, and adherence to complex character backstories within socially-driven simulations. Murder-mystery role-playing games were selected as the canonical domain, due to their requirement for nuanced perception, hypothesis formation, belief calibration, and coordinated dialogue, extending far beyond factual recall or chain-of-thought reasoning.

2. Script Diversity and Simulation Protocol

The MIRAGE framework comprises eight murder-mystery scripts engineered to be maximally diverse along four script axes: structure (Single vs. Multi-stage), realism (Orthodox vs. Unorthodox), ending type (Closed vs. Open), and script length (number of words). Each script varies in number of characters (agents), stages, and clues, ensuring comprehensive stress-testing across short/long, realistic/fantastical, fixed/dynamic outcome scenarios. The table below summarizes the design space:

| Script Title | Stages | Realism | Ending | #Agents | #Clues | Length (words) |
|---|---|---|---|---|---|---|
| Bride in Filial Dress | Single | Orthodox | Closed | 10 | 39 | ~27.5K |
| The Eastern Star Cruise Ship | Single | Orthodox | Open | 5 | 42 | ~3K |
| Night at the Museum | Single | Unorthodox | Closed | 6 | 82 | ~6.5K |
| Li Chuan Strange Talk Book | Single | Unorthodox | Open | 7 | 14 | ~45.7K |
| The Final Performance of a Big Star | Multi | Orthodox | Closed | 2 | 17 | ~5.8K |
| Raging Sea of Rest Life | Multi | Orthodox | Open | 6 | 27 | ~6.8K |
| Article 22 School Rules | Multi | Unorthodox | Closed | 7 | 17 | ~41.7K |
| Fox Hotel | Multi | Unorthodox | Open | 7 | 46 | ~62.2K |

The simulation proceeds in three phases per cycle:

  • Open Conversation: one free-form dialogue turn per character
  • Environment Interaction: each character issues a formal “Ask” to another character or “Investigates” one location
  • Voting: at the conclusion, each character votes to identify the culprit

Each agent (instantiated as an LLM) sequentially alternates these actions, emulating human social inference workflows.
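The per-cycle protocol above can be sketched as a driver loop. In the sketch below, agent behavior is stubbed out where the real framework would make LLM calls; all helper names (`speak`, `ask`, `investigate`, `vote`) and the location labels are illustrative, not part of the released MIRAGE code.

```python
from dataclasses import dataclass, field
import random

@dataclass
class Agent:
    name: str
    clues_found: set = field(default_factory=set)

def speak(agent):
    """Open Conversation: one free-form turn (an LLM call in the real system)."""

def ask(agent, target):
    """Environment Interaction, option 1: formally question another character."""

def investigate(agent, place):
    """Environment Interaction, option 2: inspect one location; returns clues."""
    return {f"clue@{place}"}

def vote(agent):
    """Final phase: name the suspected culprit (stubbed)."""
    return "suspect"

def run_simulation(agents, locations, n_cycles=5):
    """Alternate the two phases for n_cycles, then collect one vote per agent."""
    for _ in range(n_cycles):
        for agent in agents:              # Phase 1: Open Conversation
            speak(agent)
        for agent in agents:              # Phase 2: Environment Interaction
            others = [a for a in agents if a is not agent]
            if others and random.random() < 0.5:
                ask(agent, random.choice(others))
            else:
                place = random.choice(locations)
                agent.clues_found |= investigate(agent, place)
    return {agent.name: vote(agent) for agent in agents}
```

The real framework replaces each stub with a prompted LLM call and logs trust/suspicion scores after every Open Conversation phase.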

3. Formal Evaluation Metrics

MIRAGE introduces four formally defined, orthogonal metrics to capture the core dimensions of social and investigative competence:

A. Trust Inclination Index (TII):

$$
\mathrm{TII}_{c'} = \frac{\sum_{c \neq c'} P_T(c,c')}{\sum_{c \neq c'} \left[ P_T(c,c') + P_S(c,c') \right]}
$$

where $P_T(c,c')$ and $P_S(c,c')$ are the trust and suspicion scores output by character $c$ toward $c'$ after each Open Conversation. TII measures the model’s calibration between trust and suspicion, ranging from 0 (universally suspicious) to 1 (universally trusting).
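Given per-pair scores, TII reduces to a ratio of sums. A minimal sketch, assuming trust and suspicion are stored as `(speaker, target) → score` dictionaries (a hypothetical data layout, not the framework's own):

```python
def trust_inclination_index(trust, suspicion, target):
    """TII for `target`: trust received from all other characters, normalised
    by total trust + suspicion received (0 = universally suspected,
    1 = universally trusted)."""
    t = sum(v for (c, cp), v in trust.items() if cp == target and c != target)
    s = sum(v for (c, cp), v in suspicion.items() if cp == target and c != target)
    return t / (t + s) if (t + s) else 0.5  # neutral when no scores recorded
```

For example, two observers each reporting trust 3 and suspicion 1 toward a character give TII = 6 / 8 = 0.75.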

B. Clue Investigation Capability (CIC):

$$
\mathrm{CIC}_c = \frac{\mathrm{CN}_c}{\mathrm{CA}}
$$

where $\mathrm{CN}_c$ is the number of distinct clues investigated by character $c$ and $\mathrm{CA}$ is the total number of available clues. This measures proactive engagement in evidence gathering.
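The ratio is direct to compute; a one-function sketch, assuming clue identifiers are hashable (an implementation choice, not something the paper specifies):

```python
def clue_investigation_capability(clues_found, all_clues):
    """CIC_c: fraction of the script's available clues that character c
    actually investigated (distinct clues only)."""
    all_clues = set(all_clues)
    return len(set(clues_found) & all_clues) / len(all_clues)
```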

C. Interactivity Capability Index (ICI):

$$
\mathrm{ICI}_c = \frac{s_1 + s_2 + s_3 + s_4}{4}
$$

where $s_i$, for $i = 1, \dots, 4$, are 0–20 scores for Reasoning & Analysis, Communication & Cooperation, Observation, and Creative Thinking (LLM-graded). ICI aggregates role-playing fluency and depth.
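As stated, ICI is the arithmetic mean of the four LLM-graded sub-scores; a literal transcription of the formula:

```python
def interactivity_capability_index(s1, s2, s3, s4):
    """ICI_c: mean of the four LLM-graded sub-scores (Reasoning & Analysis,
    Communication & Cooperation, Observation, Creative Thinking)."""
    return (s1 + s2 + s3 + s4) / 4
```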

D. Script Compliance Index (SCI):

$$
\mathrm{SCI}_c = \frac{d + r}{2}
$$

where $d$ is the direct role-play fidelity score and $r$ is the ROUGE-L recall of an LLM-based script reconstruction, both scored in $[0, 100]$. SCI quantifies adherence to the assigned character personality and narrative arc.
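SCI averages the judge's fidelity grade with a ROUGE-L recall. The sketch below uses the standard word-level ROUGE-L recall (longest common subsequence length over reference length); the whitespace tokenization and the 0–100 rescaling of $r$ are assumptions, since the paper does not spell them out here.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_recall(reference, reconstruction):
    """ROUGE-L recall: LCS length divided by reference length (word-level)."""
    ref, rec = reference.split(), reconstruction.split()
    return lcs_length(ref, rec) / len(ref) if ref else 0.0

def script_compliance_index(d, reference, reconstruction):
    """SCI_c = (d + r) / 2, with d the 0-100 fidelity grade and r the
    reconstruction's ROUGE-L recall rescaled to 0-100."""
    return (d + 100.0 * rouge_l_recall(reference, reconstruction)) / 2
```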

These metrics permit precise, automated, and LLM-adjudicated decomposition of performance into its behavioral, cognitive, and fidelity components.

4. Experimental Protocol and Empirical Results

MIRAGE’s evaluation spans proprietary (GPT-3.5/gpt-3.5-turbo-0125, GPT-4/gpt-4-0125-preview, GPT-4o) and open-source (Qwen-2-7B-Instruct, GLM-4-9B-Chat, plus Qwen-1-7B, Qwen-1.5-7B, Yi-1.5-9B) models. Each agent underwent five cycles of Open Conversation and Environment Interaction; at every phase, trust/suspicion outputs were collected. Auxiliary summarization and rerun modules ensured output consistency. TII and CIC were computed automatically, ICI and SCI by GPT-4-turbo.

Averaged performance over all scripts is:

| Model | Victory (%) | TII | CIC | ICI | SCI |
|---|---|---|---|---|---|
| GPT-3.5 | 29.11 | 47.13 | 27.46 | 70.06 | 49.10 |
| GPT-4 | 34.69 | 76.32 | 19.01 | 76.54 | 50.42 |
| GPT-4o | 47.01 | 78.69 | 35.92 | 76.80 | 51.29 |
| Qwen-2-7B | 51.81 | 75.78 | 18.66 | 74.92 | 50.57 |
| GLM-4-9B | 31.89 | 53.85 | 20.07 | 71.60 | 48.13 |

Salient empirical findings:

  • GPT-4o leads in CIC, ICI, and SCI (maximal clue exploration, interactivity, and script adherence).
  • Qwen-2-7B achieves highest Victory rate (culprit identification), despite lower CIC.
  • All models exhibit a systematic over-trust bias (high TII), under-adjusting suspicion even during forced confessions; only Yi-1.5-9B increases suspicion appropriately under explicit exposure.
  • CIC curves show rapid initial investigation that plateaus; “Key Clue” CIC lags, exposing difficulty on crucial evidence synthesis.

Characteristic breakdowns include over-trust, contextual drift (especially in long or multi-stage scripts), script neglect (role-slip/instruction loss), and predictability bias (poorer adaptation when narrative branches emerge).

5. Diagnostic Insights and Limitations

Qualitative and quantitative analysis reveals several recurrent LLM failure modes in MIRAGE:

  • Over-trust: Models insufficiently modulate suspicion, resulting in delayed or missed identification of true culprits.
  • Script neglect and context drift: LLMs struggle with mid-script information retention and role stability, particularly in unorthodox or lengthy open-ended narratives. GPT-4o displays the highest rate of parsing/role failures, hinting at an interactivity–fidelity trade-off.
  • Clue triage failure: Models disperse investigative effort rather than concentrating on pivotal evidence, revealing limitations in adaptive information prioritization.

These findings delineate the boundaries of current LLM social reasoning and mirror the empirical weaknesses of human novice participants: suboptimal hypothesis updating, incomplete clue exploitation, and imperfect role-play commitment.

6. Implications for Next-Generation LLM Agent Design

Core recommendations from MIRAGE for improving LLM architectures and agentic frameworks:

  • Incorporate trust-calibration mechanisms for dynamic, evidence-weighted belief updates to avoid persistent over-trust.
  • Develop specialized retrieval/investigation modules to target “Key Clues” with higher selectivity, replacing blanket exploration.
  • Enhance role-memory and persona anchoring for long-horizon contextual consistency, especially across narrative phases and diverse scenarios.
  • Design adaptive interaction strategies that modulate between open dialogue and opportunistic questioning to prevent premature exhaustion of investigation actions.
  • Integrate multi-metric evaluation feedback (TII, CIC, ICI, SCI) directly into RL-based (or other agentic) policy optimization.
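The first recommendation can be made concrete with a toy evidence-weighted update. The odds-form Bayes rule below is one illustrative mechanism, not something MIRAGE prescribes:

```python
def update_suspicion(prior, likelihood_ratio):
    """One evidence-weighted belief update in odds form:
    posterior odds = prior odds * likelihood ratio, where the likelihood
    ratio is P(evidence | culprit) / P(evidence | innocent)."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)
```

Starting from a 0.2 prior, a clue four times likelier under guilt lifts suspicion to 0.5, exactly the kind of adjustment the over-trust analysis finds current models failing to make.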

Future expansions of MIRAGE proposed in (Cai et al., 3 Jan 2025) include scaling to larger casts, alternative genres, human-in-the-loop cross-model adjudication, and temporally granular analysis of decision-making, all crucial for LLM agents approaching human-level social intelligence.

MIRAGE’s multidimensional protocol establishes a new reference for systematic, reproducible, and fine-grained assessment of LLMs in complex interactive environments:

  • It quantitatively benchmarks not only reasoning and recall but also embedded social competencies—trust calibration, creative role-play, script adherence, dynamic evidence gathering—which had previously lacked standardized measures.
  • The framework is immediately extensible and core components (scripts, evaluation code) are publicly released for community benchmarking (Cai et al., 3 Jan 2025).
  • MIRAGE thus both delineates present LLM deficits in situated reasoning and shapes the desiderata for agent architectures attuned to the richness of real-world social inference and cooperative play.

In summary, MIRAGE defines the state-of-the-art methodology for evaluating LLMs as role-playing agents under richly interactive, socially nuanced, and contextually sophisticated simulation regimes, supporting a granular understanding of both strengths and persistent challenges in artificial social intelligence.

References

  1. Cai et al., “MIRAGE: Exploring How LLMs Perform in Complex Social Interactive Environments,” 3 Jan 2025.
