Identify training clips influencing specific motion patterns in generated videos

Determine which individual training clips influence specific motion patterns in a video generative model's outputs by attributing temporal dynamics to particular training examples, in order to explain and control how motion behavior arises from data.

Background

The paper studies how training data shapes motion in video generative models and introduces a motion-centric, gradient-based attribution framework. Prior attribution work largely focused on static appearance in images, leaving temporal dynamics underexplored.

The authors note that despite architectural advances in video diffusion and related models, pinpointing which training clips shape particular motion patterns in generated outputs has been challenging. Their framework aims to directly attribute motion to training data via motion-weighted gradients, addressing this gap.
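The paper's exact weighting scheme and influence estimator are not specified here. As a rough illustration of what "motion-weighted gradient" attribution could look like, the sketch below pairs a frame-difference motion mask with a TracIn-style gradient dot product. Everything in it is an assumption for illustration: the names (motion_weight, motion_weighted_grad, influence), the choice of frame differences as the motion signal, and the use of a per-pixel MSE loss are hypothetical, not the authors' method.

```python
import torch
import torch.nn.functional as F


def motion_weight(video: torch.Tensor) -> torch.Tensor:
    """Per-pixel weights emphasizing temporally dynamic regions.

    video: (T, C, H, W) with T >= 2 frames. Hypothetical choice:
    absolute frame differences, normalized to [0, 1].
    """
    diff = (video[1:] - video[:-1]).abs().mean(dim=1, keepdim=True)  # (T-1, 1, H, W)
    diff = torch.cat([diff, diff[-1:]], dim=0)  # pad back to T frames
    return diff / (diff.amax() + 1e-8)


def motion_weighted_grad(model: torch.nn.Module, video: torch.Tensor) -> torch.Tensor:
    """Flattened parameter gradient of a motion-weighted reconstruction loss."""
    model.zero_grad()
    pred = model(video)  # hypothetical: model reconstructs/denoises frames
    per_pixel = F.mse_loss(pred, video, reduction="none")  # (T, C, H, W)
    loss = (motion_weight(video) * per_pixel).mean()
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])


def influence(model: torch.nn.Module,
              train_clip: torch.Tensor,
              gen_clip: torch.Tensor) -> float:
    """TracIn-style influence score: dot product of motion-weighted gradients.

    A higher score suggests the training clip pushed the model toward the
    generated clip's motion pattern. A sketch, not the paper's estimator.
    """
    g_train = motion_weighted_grad(model, train_clip)
    g_gen = motion_weighted_grad(model, gen_clip)
    return torch.dot(g_train, g_gen).item()
```

Ranking the training set by this score would surface the clips whose gradients align most strongly with the generated video's motion-weighted gradient, which is the kind of per-clip attribution the framework targets.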

References

"However, understanding which training clips influence specific motion patterns in generated videos remains an open challenge."

Motion Attribution for Video Generation (2601.08828, Wu et al., 13 Jan 2026), in Appendix: Extended Related Work, Subsection “Motion in Video Generation”.