Handling dynamic objects in 3D Gaussian Splatting (3DGS) consolidation

Develop 3D Gaussian Splatting (3DGS) techniques that robustly model and render dynamic objects during 3D consolidation, achieving consistent multi-view reconstruction and rendering in dynamic scenes rather than only static scenes.

Background

The proposed pipeline consolidates densified X-images and RGB views in a unified 3D space using 3D Gaussian Splatting (3DGS), relying on COLMAP poses for the RGB views. While effective for static scenes, the authors note that dynamic objects pose a challenge for 3D consolidation, since current 3DGS methods typically assume the scene is static.
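The consolidation step can be illustrated with a minimal sketch: bring two posed point sets (one derived from the RGB branch via COLMAP, one from the densified X-images) into a shared world frame, and use their union to initialize the 3DGS means. The function name `consolidate_points` and the transform `T_x_to_world` are hypothetical placeholders, not the paper's API:

```python
import numpy as np

def consolidate_points(points_rgb, points_x, T_x_to_world):
    """Merge RGB-derived points (already in world frame via COLMAP poses)
    with X-image-derived points after mapping them into the same frame.

    points_rgb: (N, 3) world-frame points from the RGB branch
    points_x:   (M, 3) points from the densified X-images, in their own frame
    T_x_to_world: (4, 4) homogeneous transform into the world frame
    Returns an (N + M, 3) point set usable to initialize 3DGS means.
    """
    # Lift X-branch points to homogeneous coordinates and transform them.
    pts_h = np.hstack([points_x, np.ones((len(points_x), 1))])
    points_x_world = (pts_h @ T_x_to_world.T)[:, :3]
    # The unified point set seeds one shared Gaussian field.
    return np.vstack([points_rgb, points_x_world])

# Toy usage: identity transform, one point per branch.
rgb = np.array([[0.0, 0.0, 1.0]])
x = np.array([[1.0, 0.0, 0.0]])
merged = consolidate_points(rgb, x, np.eye(4))
print(merged.shape)  # (2, 3)
```

In the real pipeline the merged points would also carry per-point color and scale estimates before Gaussian optimization; the sketch only shows the geometric alignment.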

This limitation highlights a broader research gap: extending 3DGS beyond static radiance fields to handle scene dynamics, ensuring temporal and multi-view consistency when objects move or deform. Addressing it would enable cross-sensor view synthesis and consolidation in more realistic, dynamic environments.
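One common direction for adding dynamics, used by deformable-3DGS-style methods, is to keep a canonical set of Gaussians and learn a time-conditioned deformation field that displaces each Gaussian at render time. The sketch below is an illustrative assumption, not the cited paper's method: a tiny random-weight MLP stands in for a trained deformation network, and only the Gaussian means are deformed (a full method would also handle rotation, scale, and opacity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical (t = 0) Gaussian parameters; real 3DGS also stores scale,
# rotation, opacity, and SH color per Gaussian -- means only, for brevity.
means_canonical = rng.normal(size=(1000, 3))

# Hypothetical deformation field: (position, time) -> displacement.
# Random weights stand in for a network trained with the rendering loss.
W1 = rng.normal(scale=0.1, size=(4, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 3))
b2 = np.zeros(3)

def deform(means, t):
    """Displace canonical Gaussian means to their positions at time t."""
    # Concatenate a time column onto the positions as the MLP input.
    x = np.concatenate([means, np.full((len(means), 1), t)], axis=1)
    h = np.tanh(x @ W1 + b1)
    return means + h @ W2 + b2

# Querying two timestamps yields two consistent snapshots of the same
# Gaussian field, which the standard 3DGS rasterizer can then render.
m_t0 = deform(means_canonical, 0.0)
m_t1 = deform(means_canonical, 1.0)
print(m_t0.shape, m_t1.shape)  # (1000, 3) (1000, 3)
```

A trained model would regularize the field so the canonical frame is recovered at its reference time; here the untrained MLP merely shows the data flow.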

References

The work focuses on static scenes and does not address dynamic objects, which remain a challenge for 3D consolidation and an open problem in 3DGS research.

No Calibration, No Depth, No Problem: Cross-Sensor View Synthesis with 3D Consistency (2602.23559 - Wu et al., 27 Feb 2026), in Conclusion, Limitations