
EgoReAct: Egocentric Video-Driven 3D Human Reaction Generation

Published 28 Dec 2025 in cs.CV and cs.AI | arXiv:2512.22808v1

Abstract: Humans exhibit adaptive, context-sensitive responses to egocentric visual input. However, faithfully modeling such reactions from egocentric video remains challenging due to the dual requirements of strictly causal generation and precise 3D spatial alignment. To tackle this problem, we first construct the Human Reaction Dataset (HRD) to address data scarcity and misalignment by building a spatially aligned egocentric video-reaction dataset, as existing datasets (e.g., ViMo) suffer from significant spatial inconsistency between the egocentric video and reaction motion, e.g., dynamically moving motions are always paired with fixed-camera videos. Leveraging HRD, we present EgoReAct, the first autoregressive framework that generates 3D-aligned human reaction motions from egocentric video streams in real-time. We first compress the reaction motion into a compact yet expressive latent space via a Vector Quantised-Variational AutoEncoder and then train a Generative Pre-trained Transformer for reaction generation from the visual input. EgoReAct incorporates 3D dynamic features, i.e., metric depth, and head dynamics during the generation, which effectively enhance spatial grounding. Extensive experiments demonstrate that EgoReAct achieves remarkably higher realism, spatial consistency, and generation efficiency compared with prior methods, while maintaining strict causality during generation. We will release code, models, and data upon acceptance.

Summary

  • The paper introduces EgoReAct, the first autoregressive framework that generates 3D-aligned human reaction motions from egocentric video streams in real time, along with the spatially aligned Human Reaction Dataset (HRD) built to address data scarcity and video–motion misalignment.
  • It compresses reaction motion into a compact latent space with a Vector Quantised-Variational AutoEncoder and trains a Generative Pre-trained Transformer to produce motion tokens from the visual input, incorporating 3D dynamic features (metric depth and head dynamics) for spatial grounding.
  • Experiments demonstrate that EgoReAct outperforms prior methods in motion realism, spatial consistency, and generation efficiency while maintaining strict causality during generation.


Problem Formulation and Motivations

EgoReAct addresses the task of generating realistic, spatially aligned 3D full-body human reaction motion given purely egocentric video input. While conventional human action/reaction synthesis has primarily focused on exocentric (third-person) data or text-based descriptions, EgoReAct formulates the problem as directly inferring an embodied avatar's reactive motion in 3D from first-person visual observations, under two constraints: generation must be strictly causal (each motion frame may depend only on past video frames), and the motion must be precisely aligned with the 3D scene. Because existing datasets (e.g., ViMo) exhibit significant spatial inconsistency between egocentric video and reaction motion — moving motions are often paired with fixed-camera videos — the authors first construct the spatially aligned Human Reaction Dataset (HRD). This formulation is motivated by embodied AI, AR/VR character control, human–robot interaction, and cognitive neuroscience, where observed egocentric perspectives should drive human-like responsive body movements in real time.

Methodology

The proposed EgoReAct framework integrates the following core components:

  • Egocentric Video Encoder: Processes the streaming visual signal from the agent's viewpoint, extracting spatial-temporal features with a Transformer-based visual backbone robust to egocentric challenges (dynamic scene motion, motion blur, rapid camera rotations). Encoding is strictly causal: features at time t depend only on frames up to t.
  • Motion Tokenization: A Vector Quantised-Variational AutoEncoder (VQ-VAE) compresses the reaction motion into a compact yet expressive discrete latent space, turning continuous full-body motion into a sequence of motion tokens.
  • Autoregressive Reaction Generation: A Generative Pre-trained Transformer (GPT) predicts the next motion token conditioned on the visual features and previously generated tokens, yielding temporally coherent 3D skeletal reactions in real time.
  • 3D Dynamic Features: Metric depth and head dynamics are injected during generation, strengthening the spatial grounding between the first-person observations and the generated whole-body response (gaze shifts, limb articulation, and locomotion).
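The VQ-VAE tokenization step above can be illustrated with a minimal sketch. The paper does not publish its codebook size or latent dimensionality, so the shapes, the `quantize` helper, and the toy values below are illustrative assumptions; only the nearest-codebook-entry lookup itself is the standard VQ-VAE quantization rule.

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent vector to its nearest codebook entry.

    latents:  (T, D) array of per-frame motion latents
    codebook: (K, D) array of learned code vectors
    Returns the discrete token indices (T,) and the quantized latents (T, D).
    """
    # Squared Euclidean distance between every latent and every code vector
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)           # discrete motion tokens fed to the GPT
    return idx, codebook[idx]        # quantized values used by the decoder

# Toy example: 4 motion frames, 2-D latents, a codebook with 3 entries
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
latents = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.4], [0.0, 0.1]])
tokens, q = quantize(latents, codebook)
print(tokens.tolist())  # [0, 1, 2, 0]
```

The resulting token sequence is what the autoregressive Transformer models, which is what makes real-time next-token generation tractable compared with regressing continuous joint positions directly.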

These architectural choices build on prior advances in autoregressive and diffusion-based motion generation pipelines [Tevet et al., 2023], but are adapted to the strictly causal, egocentric setting.
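The strict causality requirement stated in the abstract can be sketched as a decoding loop in which, at each step, the generator sees only the video frames revealed so far and its own past tokens. The `predict_next_token` callable below is a hypothetical stand-in for the trained Transformer, not the paper's actual interface.

```python
def generate_reaction(video_feats, predict_next_token, horizon):
    """Strictly causal decoding loop (schematic).

    video_feats:        per-frame visual features, revealed one frame at a time
    predict_next_token: stand-in for the trained GPT; maps
                        (visible frames, past tokens) -> next motion token
    """
    tokens = []
    for t in range(horizon):
        visible = video_feats[: t + 1]          # no access to future frames
        tokens.append(predict_next_token(visible, tokens))
    return tokens

# Dummy predictor for illustration: token = number of frames seen so far
toy = generate_reaction([0.0] * 5, lambda visible, past: len(visible), horizon=5)
print(toy)  # [1, 2, 3, 4, 5]
```

This loop is what distinguishes the setting from offline motion synthesis: the model cannot condition on frames it has not yet observed, which is essential for real-time streaming use.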

Experimental Analysis and Results

Comprehensive benchmarks are conducted on datasets pairing human egocentric videos with full-body motion ground truth. EgoReAct establishes new state-of-the-art results for egocentric-driven motion generation, as measured by standard metrics such as MPJPE, motion FID, and user preference studies.
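For reference, MPJPE (Mean Per-Joint Position Error) is the average Euclidean distance between predicted and ground-truth 3D joint positions over all frames and joints. A minimal sketch, assuming `(T, J, 3)` joint arrays in meters (the paper's exact evaluation protocol, e.g. any root alignment step, is not specified here):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error over all frames and joints.

    pred, gt: (T, J, 3) arrays of 3D joint positions, same units.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

gt = np.zeros((2, 3, 3))        # 2 frames, 3 joints, at the origin
pred = gt.copy()
pred[..., 0] += 0.05            # uniform 5 cm offset along x
print(mpjpe(pred, gt))          # 0.05
```

FID for motion is computed analogously to image FID, comparing feature distributions of generated and real motion sequences rather than per-joint errors.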

Key findings include:

  • Superior Motion Realism and Consistency: The model synthesizes full-body movements that not only track the first-person POV changes but proactively anticipate dynamic scene events, resulting in naturalistic, semantically rich reactions during navigation, interaction, and manipulation.
  • Generalization: Ablation studies demonstrate the robustness of EgoReAct's vision encoder and autoregressive generation module across diverse environments, subject appearances, and novel scenarios, outperforming approaches that do not leverage egocentric cues or lack explicit spatial-temporal modeling.
  • Comparison with Prior Work: The system outperforms action-conditional diffusion models [Tevet et al., 2023], systems relying on static scene or text prompts [Guo et al., 2022], and recent egocentric motion capture/forecasting pipelines [Patel et al., 2025] with significant quantitative margins.

Implications and Future Directions

EgoReAct’s formulation and results strongly suggest the viability of direct egocentric observation-driven human motion generation. This unlocks new possibilities in AR/VR telepresence, assistive avatars, and real-time virtual human control in first-person scenarios. The explicit coupling of vision and motion implies future directions in closed-loop embodied agents capable of reactive physical interaction and more human-like planning.

Potential extensions include integration with actionable scene understanding (e.g., proactive avoidance or object manipulation in cluttered environments) and cross-modal learning with audio and inertial streams. The approach could also be leveraged to train policies for simulation-to-real transfer, and to inform neuroscience models of sensorimotor contingency learning from first-person experience.

Conclusion

EgoReAct provides a technically rigorous and conceptually novel approach to egocentric-driven 3D human reaction generation. Through causal spatial-temporal encoding of first-person visual streams and autoregressive generation over a VQ-VAE motion latent space, the framework sets new benchmarks in naturalistic, embodied motion synthesis, with implications for real-world interactive AI applications and further research into perceptually grounded human–avatar interaction.
