
VIGOR: Visual Goal-In-Context Inference for Unified Humanoid Fall Safety

Published 18 Feb 2026 in cs.RO (arXiv:2602.16511v1)

Abstract: Reliable fall recovery is critical for humanoids operating in cluttered environments. Unlike quadrupeds or wheeled robots, humanoids experience high-energy impacts, complex whole-body contact, and large viewpoint changes during a fall, making recovery essential for continued operation. Existing methods fragment fall safety into separate problems such as fall avoidance, impact mitigation, and stand-up recovery, or rely on end-to-end policies trained without vision through reinforcement learning or imitation learning, often on flat terrain. At a deeper level, fall safety is treated as monolithic data complexity, coupling pose, dynamics, and terrain and requiring exhaustive coverage, limiting scalability and generalization. We present a unified fall safety approach that spans all phases of fall recovery. It builds on two insights: 1) Natural human fall and recovery poses are highly constrained and transferable from flat to complex terrain through alignment, and 2) Fast whole-body reactions require integrated perceptual-motor representations. We train a privileged teacher using sparse human demonstrations on flat terrain and simulated complex terrains, and distill it into a deployable student that relies only on egocentric depth and proprioception. The student learns how to react by matching the teacher's goal-in-context latent representation, which combines the next target pose with the local terrain, rather than separately encoding what it must perceive and how it must act. Results in simulation and on a real Unitree G1 humanoid demonstrate robust, zero-shot fall safety across diverse non-flat environments without real-world fine-tuning. The project page is available at https://vigor2026.github.io/

Summary

  • The paper introduces a unified vision-conditioned policy that integrates goal-in-context inference for comprehensive humanoid fall safety across diverse terrains.
  • It employs a factorized data approach with sparse keyframe imitation and a teacher-student architecture, enhancing sample efficiency and sim-to-real transfer.
  • Empirical evaluations show high success rates (89.5–95% safe recovery) in both simulated and physical tests, outperforming baseline methods in safety and efficiency.

Unified Vision-Conditioned Framework for Humanoid Fall Safety: A Review of VIGOR

Introduction

The challenge of fall safety for humanoid robots is characterized by complex, high-energy impacts, coupled whole-body contact dynamics, and the necessity for rapid integration of perception and reaction. Traditional solutions typically fragment the fall safety problem into discrete modules—fall avoidance, impact mitigation, stand-up recovery—or rely solely on low-dimensional proprioception, often yielding brittle policies restricted to benign, flat environments. The paper "VIGOR: Visual Goal-In-Context Inference for Unified Humanoid Fall Safety" (2602.16511) presents a unified, vision-based policy framework for all phases of humanoid fall safety, spanning uncontrolled dynamic falls through to stand-up from arbitrary configurations in unstructured terrain.

Methodological Contributions

Factorized Data Complexity and Sparse Imitation

VIGOR introduces a factorized treatment of data complexity in humanoid fall recovery, observing that human fall and recovery poses have constrained variability and can generalize across terrains through geometric alignment. Training uses a small set of keyframe demonstrations extracted from monocular human videos on flat ground, retargeted onto the robot's morphology and projected onto simulated uneven terrain with geometric heuristics. This sparse, structural supervision contrasts with dense trajectory imitation and heuristically shaped rewards, enabling sample-efficient behavioral learning and reducing overfitting to particular environments.
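As a concrete illustration of this kind of geometric alignment, the sketch below shifts a flat-ground keyframe so that its lowest keypoint rests on a terrain heightmap. The function name, its signature, and the single-support-point heuristic are hypothetical simplifications for illustration, not the paper's retargeting pipeline.

```python
def project_keyframe_to_terrain(keypoints, heightmap, cell=0.05):
    """Shift a flat-ground keyframe so its support point rests on the terrain.

    keypoints: list of (x, y, z) body keypoints defined on the flat z=0 plane.
    heightmap: 2D list of terrain heights; heightmap[i][j] is the height at
               world position (i * cell, j * cell).
    Hypothetical heuristic: translate the whole pose vertically so that the
    lowest keypoint touches the terrain beneath it.
    """
    def terrain_height(x, y):
        # Clamp the query to the heightmap bounds and round to the nearest cell.
        i = min(max(int(round(x / cell)), 0), len(heightmap) - 1)
        j = min(max(int(round(y / cell)), 0), len(heightmap[0]) - 1)
        return heightmap[i][j]

    # Support point: the keypoint closest to the ground in the flat-terrain frame.
    support = min(keypoints, key=lambda p: p[2])
    dz = terrain_height(support[0], support[1]) - support[2]
    return [(x, y, z + dz) for (x, y, z) in keypoints]
```

A fuller projection would also re-orient the pose toward the local surface normal and handle multiple simultaneous contacts; this sketch only conveys the alignment idea.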

Integrated Perceptual-Motor Representation

The core innovation in VIGOR is a goal-in-context latent representation that tightly couples the next target pose, the current body state, and local terrain geometry. Rather than learning separate modules for perception and control, or explicitly predicting terrain structure and separately planning actions, the policy is conditioned directly on a compact latent that integrates egocentric depth input and proprioceptive history. This formulation supports closed-loop, contact-rich behaviors that must react adaptively to the terrain during non-periodic, multi-phase motion.
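A minimal sketch of such an integrated encoding is given below: depth features, proprioceptive history, and the next target pose are concatenated and projected to a single compact latent. The concatenation-plus-linear-projection architecture, the layer sizes, and the class name are assumptions made for illustration; the paper's actual network is not specified at this granularity here.

```python
import random

class GoalInContextEncoder:
    """Illustrative goal-in-context encoder (not the paper's architecture).

    Concatenates a depth-image feature vector, a proprioception history, and
    the next target pose, then applies one fixed random linear projection to
    produce a compact latent. A real implementation would use a learned,
    multi-layer network.
    """

    def __init__(self, in_dim, latent_dim, seed=0):
        rng = random.Random(seed)  # fixed seed for reproducibility
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(in_dim)]
                  for _ in range(latent_dim)]

    def encode(self, depth_feat, proprio_hist, target_pose):
        # One latent vector jointly encodes "what to perceive" and "what to reach".
        x = list(depth_feat) + list(proprio_hist) + list(target_pose)
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]
```

The design point the sketch conveys is that perception and goal are fused into one vector the policy consumes, rather than being encoded by separate modules.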

Teacher-Student Distillation Architecture

A privileged "teacher" policy is trained with PPO-based reinforcement learning using full-state proprioception, sparse keyframe supervision, and direct access to local terrain geometry. Through extensive domain randomization, the teacher learns terrain-aware control strategies that generalize across diverse environments. The deployable "student" is distilled from the teacher by reconstructing the goal-in-context latent and mimicking the teacher's actions while observing only onboard egocentric depth and a short proprioceptive history, enabling zero-shot transfer to the physical robot without real-world fine-tuning.
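The distillation objective described above can be sketched as a two-term loss. Using mean-squared error for both the latent reconstruction and the action imitation terms, and the weighting `beta`, are hypothetical choices for illustration; the paper's exact loss may differ.

```python
def distillation_loss(student_latent, teacher_latent,
                      student_action, teacher_action, beta=0.5):
    """Illustrative student distillation objective.

    Term 1: match the student's inferred goal-in-context latent to the
            privileged teacher's latent (representation distillation).
    Term 2: imitate the teacher's action (behavior cloning).
    beta is a hypothetical weighting hyperparameter.
    """
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    return (mse(student_latent, teacher_latent)
            + beta * mse(student_action, teacher_action))
```

Minimizing the first term forces the student to recover, from depth and proprioception alone, the same integrated representation the teacher built from privileged state.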

Empirical Evaluation

Simulation Results

The policy is validated in simulation on a 23-DoF Unitree G1 model over a comprehensive spectrum of initializations (stand-up from rest and dynamic fall recovery) and challenging non-flat terrain (rough ground, slopes, stairs, and similar). Compared with contemporary baselines (HOST, a stand-up RL method, and FIRM, a goal-diffusion method), VIGOR achieves a stand-up success rate of 89.5% and a fall-recovery success rate of 90.5%, surpassing both baselines by substantial margins (HOST: 15.2%; FIRM: 30.8% and 20.2% on the two tasks, respectively). Notably, safe success, defined by the head never coming close to the ground, is also significantly higher. These gains come with lower tracking error, smaller displacement, and lower energy expenditure, indicating genuinely more efficient behavior rather than compensatory, aggressive recovery.

Ablation studies validate the central role of the shared goal-in-context latent, keypoint-based spatial priors, and egocentric vision (for safe, terrain-adaptive reactions) in attaining high success and safety metrics.

Real-World Transfer

Zero-shot deployment on a physical Unitree G1 confirms the sim-to-real robustness of VIGOR—achieving 88–95% safe recovery across flat, box, and stone terrains for stand-up trials, and 93%+ safe recovery across push-induced fall scenarios (including complex terrain transitions such as stairs and platforms). Policies lacking depth input suffered marked drops in safety and robustness under asymmetric contacts and non-flat conditions, demonstrating the necessity of vision for reliable fall safety in real-world deployment.

Vision and Latent Goal Representation

The effect of egocentric vision is most pronounced not in flat scenarios (where proprioceptive-only policies may suffice), but in geometric/structural generalization—enabling the robot to perform terrain-adaptive support selection and safe contact under unstructured, high-dimensional configurations. The goal-in-context latent effectively encapsulates all information required for robust in-situ decision making without explicit geometric parsing or long-horizon trajectory planning.

Implications and Future Directions

The structured teacher-student framework, together with data factorization and goal-in-context inference, represents a robust architectural paradigm for other whole-body, latent-conditioned robot behaviors that require integrated exteroceptive sensing. By outperforming strong proprioceptive and imitation learning baselines in both simulation and on real hardware, the work provides concrete evidence that sparse, keyframe-based human priors, combined with vision-conditioned RL and distillation, scale effectively to unconstrained, contact-rich domains.

Practically, unified fall recovery as achieved by VIGOR removes the hard boundary between falling, impact, and standing up—enabling humanoids to robustly survive and recover from diverse disturbances in the wild, crucial for both autonomy and safety in real-world robotics deployments.

Theoretically, the latent distillation strategy points to a general mechanism for integrating hierarchical perceptual-motor policies in high-DoF robots. Moreover, the factorized view of data complexity (separating motion structure from terrain variety) offers a path to scalable behavioral learning with minimal demonstration burden, with immediate application to other contact-rich behavioral domains.

Potential extensions may focus on coupling VIGOR's fall-safety policy with real-time locomotion, integrating proactive fall-avoidance (rather than solely reactive recovery) into a single policy, or enriching the goal-in-context latent for more highly multimodal or dynamic tasks.

Conclusion

VIGOR establishes a unified, vision-conditioned framework for humanoid fall safety, combining structurally efficient human motion guidance, goal-in-context perceptual-motor representations, and structured teacher-student RL to produce robust, transferable recovery policies. Simulation and real-world results alike demonstrate substantial gains in both success and safety, with strong practical and theoretical implications for scalable whole-body control in humanoid robotics (2602.16511).
