
AI-Mediated Digital Civic Storytelling

Updated 24 January 2026
  • AI-DCS is a framework integrating generative AI into civic narrative creation for authentic representation and democratic engagement.
  • It employs technologies like facial emotion recognition, adaptive language models, and attention tracking to tailor narratives in real time.
  • Participatory co-creation workflows ensure diverse civic voices and critical evaluation of AI-generated cultural heritage and civic discourse.

AI-mediated Digital Civic Storytelling (AI-DCS) denotes the integration of generative artificial intelligence technologies into the development, adaptation, and dissemination of civic and cultural narratives. AI-DCS platforms facilitate participatory co-creation and synthesis of stories that represent diverse civic identities, collective memories, and local perspectives, often aiming to foster perspective-taking, shared understanding, and democratic engagement in politically or culturally charged contexts. Implementations span adaptive narrative delivery, first-person synthesis pipelines, collaborative image generation, and interactive storytelling systems. Foundational works detail platform architectures, user-centered methodologies, evaluation metrics, and design guidelines to address the dual challenges of affective engagement and representation fidelity (Wegemer et al., 30 Jun 2025, He et al., 2024, Pait et al., 2024, Overney et al., 23 Sep 2025).

1. Definitions and Theoretical Foundations

AI-DCS draws on multiple theoretical traditions: transportation-imagery theory (Green & Brock, 2000), social identity theory (Tajfel & Turner, 1979), parasocial interaction (Reeves & Nass, 1996), participatory design, and narrative persuasion. Core constructs include transportation, identification, and interaction.

A plausible implication is that combining real-time affect sensing with AI-driven adaptation mechanisms can attenuate identity-protective resistance and build empathy across polarized civic boundaries (Wegemer et al., 30 Jun 2025).

2. System Architectures and Technical Components

Modern AI-DCS platforms employ modular, closed-loop system designs integrating multimodal sensing, generative pipelines, and human-AI dialogue management.

Typical modules (functionality; technologies):

  • Facial Emotion Recognition: segment-level emotion aggregation (Δp_{k,e*}, θ_e = 0.30); CNN (TensorFlow/RAF-DB)
  • Attention Tracking: binary a_t detection with persistent-inattention flagging; OpenCV Haar cascades
  • Narrative Adaptation Engine: beat-by-beat GPT-4 language tuning; LangChain, Azure TTS/STT
  • Human-AI Synthesis Pipeline: theme classification and composite story generation; GPT-4o-mini, Claude 3.5 Sonnet
  • User Interface: multimodal storytelling and dialogue supervision; WebRTC, timestamped logging

For example, one architecture triggers narrative “emotive amplification” mode whenever live affect or attentional signals fall below empirically determined thresholds, adjusting narrative tone and emotional vividness via LLM prompt conditioning (Wegemer et al., 30 Jun 2025). Human-AI narrative synthesis contexts apply multiple LLM passes for scene/theme extraction, draft review, and citation validation, typically blending four AI generations with three rounds of human editorial oversight (Overney et al., 23 Sep 2025).
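As a minimal sketch of this gating logic (function and variable names are illustrative, not the platform's API; the 0.30 value is the θ_e threshold reported for the emotion-recognition module):

```python
THETA_E = 0.30  # empirically determined emotion threshold (theta_e)

def select_mode(delta_p_e: float, attentive: bool) -> str:
    """Choose the narration mode for the next story beat.

    delta_p_e: segment-level aggregated margin for the expected emotion
    attentive: binary attention flag (a_t) from the attention tracker
    """
    # Amplify emotional vividness when affect or attention falls below threshold.
    if delta_p_e >= THETA_E and attentive:
        return "baseline"
    return "emotive_amplify"
```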

3. Narrative Mechanisms and Adaptation Algorithms

AI-DCS operationalizes transportation, identification, and interaction using affect-adaptive and personalization strategies:

  • Transportation: Real-time sensing identifies mismatches between expected and actual user emotion (Δp_{k,e*} < θ_e), triggering LLM prompts that heighten sensory detail, emotional intensity, or narrative perspective (Wegemer et al., 30 Jun 2025).
  • Identification: Demographic profiling in onboarding dialogs guides LLM narrative adaptation and synthetic voice generation to foster character alignment and user identification.
  • Composite Synthesis: Thematic mapping and scene extraction pipelines (f_prep: X → C, f_syn: {x_j1, …, x_jr; θ_k}) aggregate constituent quotes into first-person composite stories anchored with inline stakeholder citations (Overney et al., 23 Sep 2025).
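The composite-synthesis mapping can be sketched as follows; `classify` and `compose` stand in for the LLM passes described above, and all names are illustrative assumptions rather than the cited system's API:

```python
def f_prep(quotes, classify):
    """f_prep: X -> C. Group raw stakeholder quotes into theme clusters."""
    clusters = {}
    for idx, quote in enumerate(quotes):
        clusters.setdefault(classify(quote), []).append((idx, quote))
    return clusters

def f_syn(clusters, compose):
    """f_syn: merge each cluster {x_j1, ..., x_jr} into a first-person
    composite story anchored with inline citation markers."""
    stories = {}
    for theme, members in clusters.items():
        cited = [f"{text} [{idx}]" for idx, text in members]
        stories[theme] = compose(theme, cited)
    return stories
```

In a deployment, `classify` would be an LLM theme-classification pass and `compose` a constrained generation call, followed by the rounds of human editorial review and citation validation noted above.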

Example pseudocode (symbols follow the thresholds defined above):

\begin{algorithmic}[1]
\For{$k = 1$ \textbf{to} $K$}
    \State $(\Delta p_{k,e^*},\, a_k) \gets$ Sense\_Emotion\_Attention(segment $k$)
    \If{$\Delta p_{k,e^*} \ge \theta_e$ \textbf{and} $a_k = 1$}
        \State mode $\gets$ ``baseline''
    \Else
        \State mode $\gets$ ``emotive\_amplify''
    \EndIf
    \State narration $\gets$ GPT4\_Prompt(segment $k$, mode)
    \State PlayAudio(narration)
    \If{persistent\_mismatch($\Delta p$) \textbf{or} persistent\_inattention($a$)}
        \State Engage\_User\_Dialogue()  \Comment{Supervisory GPT-4}
    \EndIf
\EndFor
\end{algorithmic}

In cultural heritage contexts, generative diffusion models (Stable Diffusion v1.3.2 + extension) support objective depiction, emotional recreation, or exploratory transformation of civic sites, with prompt strategies ranging from “subject + setting + style + mood” to abstract, metaphorical imagery (He et al., 2024). Prompt iteration count (k), narrative coherence (NC), and user satisfaction (US) are proposed as tractable evaluation metrics.
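The "subject + setting + style + mood" strategy can be sketched as a simple prompt composer (slot names and output format are illustrative assumptions, not the cited system's interface):

```python
def compose_prompt(subject: str, setting: str, style: str, mood: str) -> str:
    """Assemble a diffusion prompt from the four structured slots."""
    return f"{subject} in {setting}, {style} style, {mood} mood"

# e.g. compose_prompt("ancestral temple gate", "riverside old town",
#                     "ink-wash", "nostalgic")
# -> "ancestral temple gate in riverside old town, ink-wash style, nostalgic mood"
```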

4. Participatory Workflows and Interaction Frameworks

Empirical AI-DCS deployments employ iterative human-AI feedback loops, scaffolded by prompt engineering templates, multimodal artifact creation, and performance-based reflection. Notable features include:

  • Workshop-Based Co-Creation: Participants (adults or children) engage with generative AI models to construct, critique, and perform civic narratives—via image or text generation, puppet making, and staged debate (Pait et al., 2024).
  • Prompt Engineering and Iteration: Spell-templates (“Take [base image], remove [feature], add [feature], style [adjective]”) and structured feedback loops (“Prompt → Output → Critique → Refinement”) are standard, with community members refining prompts to achieve higher narrative fidelity and personal resonance (He et al., 2024, Pait et al., 2024).
  • Hybrid Digital-Physical Artifacts: AI-generated images serve as backdrops or provocations; physical making (puppets, performance) anchors digital outputs in embodied story enactment, facilitating critical engagement and technology literacy (Pait et al., 2024).
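A minimal sketch of the spell template and the Prompt → Output → Critique → Refinement loop; `generate` and `critique` stand in for the image model and the participant, and all names are assumptions:

```python
def spell(base_image: str, remove: str, add: str, style: str) -> str:
    """Fill the spell template: 'Take [base image], remove [feature],
    add [feature], style [adjective]'."""
    return f"Take {base_image}, remove {remove}, add {add}, style {style}"

def refine(prompt, generate, critique, max_rounds=3):
    """Run the feedback loop until the critique proposes no revision."""
    output = generate(prompt)
    for _ in range(max_rounds):
        revised = critique(prompt, output)
        if revised is None:
            break
        prompt = revised
        output = generate(prompt)
    return prompt, output
```

Bounding the loop with `max_rounds` mirrors the small, fixed number of iteration rounds reported in workshop settings.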

A plausible implication is that transparent, participatory workflows enhance civic agency and promote critical evaluation of AI-generated content, especially among non-expert or youth participants (Pait et al., 2024).

5. Evaluation Methodologies and Metrics

Published AI-DCS platforms emphasize mixed-methods evaluation: field deployments, user studies, controlled experiments, telemetry, and narrative coding. Key protocols and metrics include:

  • Affective and Perspective-Taking Scales: Measurement of transportation (Green & Brock, 2000), perspective-taking (Davis, 1983), and affective thermometer scores post-narrative exposure (Wegemer et al., 30 Jun 2025).
  • Engagement Analytics and Outcome Feedback: Session duration, navigation pathways, storycard feedback ratings (Likert scale for Relatability, Understanding, Respect, Trust, Curiosity), and citation exploration rates (Overney et al., 23 Sep 2025).
  • Narrative Coherence and User Satisfaction: Computed as

\mathrm{NC} = \frac{1}{T-1} \sum_{t=1}^{T-1} \mathrm{sim}(I_t, I_{t+1}), \qquad \mathrm{US} = \frac{1}{N}\sum_{u=1}^{N} r_u

where similarity and rating measures capture sequential consistency and subjective approval (He et al., 2024).
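Under these definitions, NC and US reduce to simple averages. A pure-Python sketch, assuming cosine similarity over consecutive scene embeddings (the similarity measure is an illustrative choice):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def narrative_coherence(embeddings):
    """NC: mean similarity between consecutive scene embeddings I_t, I_{t+1}."""
    pairs = list(zip(embeddings[:-1], embeddings[1:]))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

def user_satisfaction(ratings):
    """US: mean of per-user ratings r_u."""
    return sum(ratings) / len(ratings)
```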

Experimental findings indicate that scene-dominant narratives elicit higher interpersonal respect and trust than opinion-heavy ones (e.g., Respect M=3.72 vs. M=3.29, p=.011), with narrative format influencing relational outcomes rather than stances on policy (Overney et al., 23 Sep 2025).

6. Challenges and Design Countermeasures

AI-DCS systems face notable technical and socio-ethical challenges:

  • Representation and Bias: Generative AI frequently misrepresents detailed cultural features, merges objects incorrectly, and introduces Westernized artifacts, impacting narrative fidelity for underrepresented heritage (He et al., 2024).
  • AI Disclosure and Trust: Human-AI role boundaries and transparent authorship disclosure modulate user trust—community-integrated disclosure increases acceptance compared to experimental contexts (Overney et al., 23 Sep 2025).
  • Ethical Risks: Content hallucination, composite persona consent, dialogue privacy, and platform “jail-break” detection demand robust human oversight and audit mechanisms (Wegemer et al., 30 Jun 2025).

Recommended design countermeasures:

  • Fine-tune models with curated local datasets and reinforcement learning from human feedback (RLHF) (He et al., 2024).
  • Provide objective references to prompt user verification and spot-the-error awareness.
  • Scaffold prompt engineering and meta-viewing of how user expressions shape AI outputs.
  • Explicitly constrain AI dialogue agents with ethical filters and opt-out controls (Wegemer et al., 30 Jun 2025).

7. Future Directions and Open Research Questions

Emerging research suggests multiple trajectories for AI-DCS systems:

  • Algorithmic Literacy and Reflection: Integration of meta-analytics to illustrate AI’s influence on narrative outcomes, plus educational modules on affective bias and prompt engineering (Wegemer et al., 30 Jun 2025).
  • Multimodal Fusion and Enhanced Personalization: Advancing fusion across facial, vocal, and linguistic sentiment data to improve emotion classification and narrative adaptation (Wegemer et al., 30 Jun 2025).
  • Participatory Review and Authenticity Assurance: Developing automated citation validation and participatory review mechanisms with original contributors (Overney et al., 23 Sep 2025).
  • Cross-Domain Applicability: Expanding AI-DCS methods to urban planning, budgeting, and broader civic contexts, always foregrounding human agency and inclusive representation.

Open questions include formalizing iterative AI-prompting skill development, quantifying long-term civic engagement impacts, scaling to heterogeneous community groups, and refining qualitative-quantitative evaluation frameworks for narrative quality and learning (Pait et al., 2024, Overney et al., 23 Sep 2025).
