
HRD: Human Reaction Dataset Insights

Updated 4 January 2026
  • HRD is a curated collection of empirical human responses with precise 3D spatial and temporal alignment, addressing limitations of prior datasets.
  • It employs rigorous data collection and synchronization methods, integrating video, audio, and motion metrics to study causality and adaptation in HRI.
  • The dataset enables advancements in real-time motion synthesis, adaptive failure detection, and dynamic explanation strategies in collaborative robotics.

A Human Reaction Dataset (HRD) is a curated collection of empirical data capturing human behavioral responses to external stimuli—commonly egocentric video, observed failures in human-robot interaction (HRI), or robot-explained events—annotated with detailed multimodal features for use in modeling, prediction, and generation tasks. HRDs are fundamental for studying causality, spatial-temporal alignment, and multimodal characteristics of human reactions, with applications ranging from real-time motion generation to adaptive failure detection in collaborative robotics (Zhang et al., 28 Dec 2025, Bremers et al., 2023, Khanna et al., 20 Feb 2025).

1. HRD Motivation and Scope

The core motivation for constructing an HRD arises from challenges in modeling context-sensitive, adaptive human responses—particularly the ability to generate or predict reactions that are strictly causal and precisely aligned in three-dimensional space. Prior datasets, such as ViMo, exhibit significant spatial inconsistency: dynamic motion recordings are paired with static or misaligned video, compromising the validity of spatio-temporal analyses (Zhang et al., 28 Dec 2025). HRDs address data scarcity, spatial misalignment, and the need for ecological validity in both egocentric and third-person HRI scenarios.

Representative HRDs span a range of contexts:

  • Egocentric video-reaction alignment: the HRD created for "EgoReAct" satisfies strict causality and 3D spatial-alignment requirements by pairing egocentric video with matched reaction-motion data (Zhang et al., 28 Dec 2025).
  • Bystander affect detection: BAD (Bystander Affect Detection) dataset elicits spontaneous reactions to task failure, supporting error recognition in HRI (Bremers et al., 2023).
  • Multi-modal reactions to robot failures and explanations: REFLEX captures longitudinal, annotated reactions to robot failures and varying explanation strategies, enabling the study of trust dynamics and adaptive responses (Khanna et al., 20 Feb 2025).

2. Data Collection Methodologies

HRDs deploy rigorous protocols to maximize ecological and experimental validity. Data sources typically include:

  • Stimulus/Interaction Recording: Video stimuli portraying errors (human or robot), egocentric camera feeds, or live collaborative sessions.
  • Participant Reaction Capture: Webcam recordings (e.g., online surveys in BAD (Bremers et al., 2023)), fixed or torso-mounted RGB cameras (REFLEX (Khanna et al., 20 Feb 2025)), or sensor streams capturing motion responses.
  • Post-processing and Synchronization: Videos are spatially cropped and resized (e.g., to 224×224 RGB at 30 fps in BAD), and multimodal streams (audio, frame-based video, derived facial/gaze features) are synchronized via timestamps or frame indices (REFLEX).

Sampling rates reflect both hardware constraints (~4.4 Hz in REFLEX) and modality-specific requirements; timestamps or frame indices then enable per-frame synchronization across modalities.
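The timestamp-based synchronization step can be sketched in a few lines: for each sample in a reference stream, find the nearest sample in another stream and discard pairs farther apart than a tolerance. The function name, the tolerance, and the sampling rates below are illustrative, not part of any dataset's released tooling.

```python
from bisect import bisect_left

def align_streams(ref_ts, other_ts, max_gap=0.25):
    """For each reference timestamp, return the index of the nearest
    sample in the other (sorted) stream, or None if no sample lies
    within max_gap seconds."""
    aligned = []
    for t in ref_ts:
        i = bisect_left(other_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
        best = min(candidates, key=lambda j: abs(other_ts[j] - t))
        aligned.append(best if abs(other_ts[best] - t) <= max_gap else None)
    return aligned

# Example: align a ~4.4 Hz feature stream to 30 fps video frames
video_ts = [k / 30.0 for k in range(90)]    # 3 s of video frames
feature_ts = [k / 4.4 for k in range(14)]   # ~3 s of derived metrics
pairs = align_streams(feature_ts, video_ts)
```

Nearest-neighbor matching with a gap threshold keeps the alignment robust to jitter in either stream without resampling the raw data.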

3. Dataset Composition and Structural Elements

Key structural dimensions of representative HRDs include:

Dataset        | Participants  | Stimuli                                                  | Modalities
BAD            | 54            | 46 videos                                                | Webcam video, cropped RGB frames
REFLEX         | 55            | 16 physical objects, 3 failure types, 12 events/session  | Audio, dual-camera RGB video, speech transcript, facial/emotional/gaze/body metrics
HRD (EgoReAct) | Not specified | Egocentric videos with matched 3D reaction motion        | Egocentric video, 3D motion trajectories

Reaction durations, event counts, and granular breakdowns vary by task: BAD comprises approximately 2,452 reactions across a spectrum of task failures, while REFLEX captures 12 programmed failures per session across its explanation-strategy conditions (Bremers et al., 2023, Khanna et al., 20 Feb 2025).

4. Annotation Principles and Analytical Metrics

Annotation schemas in HRDs combine automated and manual coding:

  • Event Phase Segmentation: Pre-failure, failure onset, explanation/apology, and resolution/phased assistance (REFLEX).
  • Categorical Reaction Labels: Macro-categories such as positive, negative, or skeptical, derived from facial and audio emotion likelihood scores L^e_face(t) and L^e_audio(t); the dominant category for each event e is selected by maximizing the likelihoods averaged over the event (Khanna et al., 20 Feb 2025).
  • Manual Verification: Gaze and pose annotations manually coded and cross-validated; Cohen’s κ ≈ 0.80–0.85 attests to reliability in REFLEX.
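The dominant-category rule above (pick the label whose likelihood, averaged over the event's frames, is highest) can be sketched as follows. The equal weighting of the two modalities and the toy likelihood values are assumptions for illustration, not values from REFLEX.

```python
def dominant_reaction(face_likelihoods, audio_likelihoods):
    """Select the reaction category whose likelihood, averaged over all
    frames of the event and over the two modalities, is largest.

    Each argument maps a category name to a per-frame likelihood series.
    The 50/50 modality weighting is an illustrative assumption."""
    def mean(xs):
        return sum(xs) / len(xs)
    scores = {
        c: 0.5 * (mean(face_likelihoods[c]) + mean(audio_likelihoods[c]))
        for c in face_likelihoods
    }
    return max(scores, key=scores.get)

# Toy per-frame likelihoods for a two-frame event
face = {"positive": [0.2, 0.3], "negative": [0.6, 0.5], "skeptical": [0.2, 0.2]}
audio = {"positive": [0.1, 0.2], "negative": [0.7, 0.6], "skeptical": [0.2, 0.2]}
# dominant_reaction(face, audio) -> "negative"
```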

Quantitative reaction metrics include:

  • Reaction time: Δt_resp = t_response − t_failure
  • Engagement score (Editor's term): E = α·(gaze_on_robot_duration) + β·(verbal_positivity_score), with α and β determined empirically
  • Emotion likelihoods: vector-valued outputs over 48 classes per frame
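A minimal sketch of the two scalar metrics above. The default weights α = 0.6 and β = 0.4 are placeholders, since the text notes the weights are determined empirically.

```python
def reaction_time(t_failure, t_response):
    """Latency (s) between failure onset and first observable response."""
    return t_response - t_failure

def engagement_score(gaze_on_robot_s, verbal_positivity,
                     alpha=0.6, beta=0.4):
    """Weighted engagement score E = alpha*gaze + beta*positivity.
    The default alpha/beta are illustrative placeholders; the actual
    weights are fit empirically per the annotation protocol."""
    return alpha * gaze_on_robot_s + beta * verbal_positivity

# Example: response 1.5 s after failure, 2 s of gaze, positivity 0.5
rt = reaction_time(10.0, 11.5)          # -> 1.5
e = engagement_score(2.0, 0.5)          # -> 1.4 with default weights
```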

Multimodal feature sets span facial action units (AUs), 2D/3D landmarks, arousal/valence, head pose, 24 upper-body landmarks, and gaze vectors (𝐠(t) ∈ ℝ³), organized per temporal event phase.

5. Experimental Design and Longitudinal Protocols

HRDs often implement repeated-measures and temporally adaptive design:

  • Explanation-level manipulation: Five between-subject strategies in REFLEX modulate explanation content over four repeated rounds (Fixed-Low, Fixed-Mid, Fixed-High, Decay-Slow, Decay-Rapid), enabling systematic investigation of trust repair and adaptation (Khanna et al., 20 Feb 2025).
  • Event Trials: BAD features 39–46 stimulus video exposures per participant, spanning error and control conditions (Bremers et al., 2023).
  • Statistical Analyses: Repeated-measures ANOVA, linear mixed-effects models for confusion likelihoods, and post-hoc Bonferroni paired t-tests are typical (Khanna et al., 20 Feb 2025).
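As a sketch of the post-hoc step, the paired t statistic and the Bonferroni-adjusted significance threshold can be computed with the standard library alone; this illustrates the correction itself, not the specific analysis pipeline used in REFLEX, and the data values are invented.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic: mean difference over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison significance threshold under Bonferroni correction."""
    return alpha / n_comparisons

# Toy example: per-participant confusion scores under two conditions
cond_a = [2.1, 2.9, 4.2, 3.8]
cond_b = [1.0, 2.0, 3.0, 3.0]
t_stat = paired_t(cond_a, cond_b)
threshold = bonferroni_alpha(0.05, 10)   # 10 pairwise comparisons
```

Dividing α by the number of comparisons controls the family-wise error rate, which is why the correction is paired with post-hoc t-tests across the five explanation strategies.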

Baseline trials without failures serve as controls for pre-failure engagement and reaction assessment.

6. Application Domains and Known Limitations

HRDs advance research in adaptive generation and detection:

  • Real-time reaction motion synthesis: HRD enables causal, spatially consistent motion generation in EgoReAct via VQ-VAE and autoregressive GPT frameworks, integrating 3D metric depth and head dynamics (Zhang et al., 28 Dec 2025).
  • Failure detection in HRI: BADNet, trained on the BAD dataset, predicts failure occurrence from bystander video input via deep learning, achieving >90% precision (Bremers et al., 2023).
  • Trust modeling and adaptive explanations: REFLEX informs dynamic adjustment of robotic explanations by decoding user confusion/trust from multimodal cues (Khanna et al., 20 Feb 2025).

Known limitations include ecological constraints (primarily lab settings), demographic skew (e.g., university participants), and limited physiological signal collection (no biosensors). Future expansions may encompass wearable biosignal integration, expanded age ranges, richer failure/task diversity, and real-world deployment scenarios (Khanna et al., 20 Feb 2025).

7. Relevance and Future Directions in HRD Research

HRDs constitute foundational resources for next-generation human-aware, causally-grounded, and explainable collaborative systems. Their rigor in spatio-temporal alignment, multimodal annotation, and repeated-exposure protocol equips the field to train adaptive controllers, calibrate human-robot trust, and optimize explanation delivery. As ecological validity and in-the-wild applicability evolve, HRDs offer essential benchmarks for evaluating realism, consistency, and generative efficiency in human reaction modeling (Zhang et al., 28 Dec 2025, Bremers et al., 2023, Khanna et al., 20 Feb 2025).
