
Spatiotemporal Inpainting: Techniques & Applications

Updated 31 January 2026
  • Spatiotemporal inpainting is the process of reconstructing missing or masked video content to ensure both spatial detail and temporal continuity.
  • It employs methodologies such as patch-based optimization, optical flow estimation, and deep-learning architectures to synthesize realistic video sequences.
  • Recent advances integrate diffusion models and transformer architectures to handle dynamic textures and complex motion while maintaining coherence.

Spatiotemporal inpainting is the task of reconstructing visually and temporally coherent content in missing or masked regions of a video sequence, encompassing both spatial (pixel/region-based) and temporal (across frames) dimensions. The field unites methodologies from patch-based optimization, motion-guided synthesis, deep neural architectures, and generative diffusion models. Spatiotemporal inpainting underpins applications in film post-production, restoration, scene editing, simulation, and even scientific inference—where physically plausible, time-evolving fields must be inferred from sparse or incomplete spatiotemporal data.

1. Mathematical Formulation and Core Problem

Let $u: \Omega \to \mathbb{R}^3$ denote a color video, $(x, y, t) \mapsto u(x, y, t)$, with spatiotemporal domain $\Omega \subset \mathbb{Z}^3$. The inpainting domain $H \subset \Omega$ (the "hole") is specified, with available data $D = \Omega \setminus H$. The objective is to synthesize $u$ over $H$ so that the result is visually plausible, locally consistent (spatially), and globally coherent in time, that is, with respect to object trajectories, appearance, and dynamic texture.

This general setup extends to arbitrary data modalities (RGB, depth, physical fields), partial sensor patterns, and both structured ("missing frames") and unstructured ("moving object removal") regions—a scenario explicitly addressed in recent diffusion-based frameworks (Wan et al., 2024, Li et al., 16 Jun 2025).
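The domain decomposition above can be sketched directly with boolean masks over a video tensor. The function name, array shapes, and toy hole pattern below are illustrative, not taken from any cited implementation:

```python
import numpy as np

def split_domain(video, hole_mask):
    """Split a video into known data D and the inpainting hole H.

    video:     float array of shape (T, H, W, 3), the observed sequence u.
    hole_mask: bool array of shape (T, H, W); True marks the hole H.
    Returns the video with the hole zeroed out, plus the complementary
    data mask D = Omega \\ H.
    """
    data_mask = ~hole_mask                 # D = Omega \ H
    masked = video * data_mask[..., None]  # keep known samples only
    return masked, data_mask

# Toy example: a 4-frame, 8x8 clip with a 2x2 hole drifting across frames.
T, Hh, W = 4, 8, 8
video = np.random.rand(T, Hh, W, 3)
hole = np.zeros((T, Hh, W), dtype=bool)
for t in range(T):
    hole[t, 3:5, t:t + 2] = True
masked, data = split_domain(video, hole)
```

Structured masks ("missing frames") simply set entire time slices of `hole` to `True`, while unstructured masks (object removal) carve out per-frame regions, as in the example.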

2. Patch-Based and Optimization Approaches

Early global approaches cast spatiotemporal inpainting as non-local patch-based energy minimization (Newson et al., 2015). The problem is formulated as

$$E(u, \varphi) = \sum_{p \in H} d^2\!\left(W_p, W_{p+\varphi(p)}\right)$$

where

  • $W_p$ is the spatiotemporal patch around location $p = (x, y, t)$,
  • $\varphi$ is a shift/correspondence map with $p + \varphi(p) \in D$,
  • $d^2$ is the $\ell^2$ patch distance, possibly augmented with local texture features.

The energy is minimized by alternating:

  • Approximate Nearest Neighbor (ANN) patch search in 3D (PatchMatch extension).
  • Weighted aggregation-based pixel reconstruction, with a final "best patch" step for sharp synthesis.
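A minimal sketch of the patch distance and the correspondence search follows; a brute-force scan over a candidate list stands in for the 3D PatchMatch ANN step, and all names and shapes are hypothetical:

```python
import numpy as np

def patch(video, p, r=1):
    """Extract the spatiotemporal patch W_p of radius r around p = (t, y, x)."""
    t, y, x = p
    return video[t - r:t + r + 1, y - r:y + r + 1, x - r:x + r + 1]

def patch_distance(video, p, q, r=1):
    """Squared l2 distance d^2(W_p, W_q) between two patches."""
    return float(np.sum((patch(video, p, r) - patch(video, q, r)) ** 2))

def best_shift(video, p, candidates, r=1):
    """Brute-force stand-in for the ANN search: pick phi(p) minimizing
    d^2(W_p, W_{p + phi(p)}) over candidate source locations in D."""
    q_best = min(candidates, key=lambda q: patch_distance(video, p, q, r))
    return tuple(np.subtract(q_best, p))   # phi(p) = q - p

video = np.random.rand(5, 10, 10, 3)
p = (2, 4, 4)                              # a hole location (t, y, x)
candidates = [(2, 4, 7), (1, 2, 2), (3, 6, 6)]  # locations in D
phi_p = best_shift(video, p, candidates)
```

In the full algorithm this search alternates with weighted aggregation of the matched patches to re-estimate the pixels of $H$, iterating until the energy stabilizes.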

Key innovations enable robust handling of dynamic textures, multiple objects, and moving backgrounds:

  • Texture descriptors (averaged image gradients) ensure structure-aware matching and improve texture synthesis.
  • Global affine stabilization preprocesses mobile camera or background motion via framewise affine alignment; inpainting is executed in the stabilized domain (Newson et al., 2015).
  • Multi-scale pyramid: only spatial downsampling is performed per level; time remains intact for temporal coherence.
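The spatial-only pyramid can be sketched as follows; the 2x2 box filter is a simple stand-in for a proper low-pass filter, and the shapes are illustrative:

```python
import numpy as np

def spatial_pyramid(video, levels=3):
    """Build a coarse-to-fine pyramid by 2x spatial downsampling only.

    Time is never subsampled, so temporal neighbours stay aligned at
    every level and temporal coherence can be enforced throughout.
    video: float array of shape (T, H, W, C); returns finest level first.
    """
    pyramid = [video]
    for _ in range(levels - 1):
        v = pyramid[-1]
        # 2x2 spatial box filter (stand-in for a proper anti-aliasing filter).
        v = 0.25 * (v[:, 0::2, 0::2] + v[:, 1::2, 0::2]
                    + v[:, 0::2, 1::2] + v[:, 1::2, 1::2])
        pyramid.append(v)
    return pyramid

video = np.random.rand(6, 16, 16, 3)
pyr = spatial_pyramid(video)  # shapes: (6,16,16,3), (6,8,8,3), (6,4,4,3)
```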

Limitations include difficulty with long occlusions, extremely large or "drifting" holes, and a reliance on matching existing content elsewhere in the sequence (Newson et al., 2015).

3. Optical Flow and Motion-Guided Inpainting

Optical flow estimates are foundational for propagating appearance information spatiotemporally. A prominent class of methods (Xu et al., 2019) proceeds as follows:

  • Flow completion: A deep network (cascade of ResNet-based subnetworks) predicts dense optical flow within holes, using coarse-to-fine refinement and "hard example mining" to focus learning on difficult, ambiguous regions.
  • Guided pixel propagation: Known pixels are copied along completed flow fields to fill missing regions; candidate pixels from both forward and backward directions are blended via distance-based weights.
  • Residual hole handling: Pixels unreached by propagation are filled with a single-frame inpainting network.
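The propagate-and-blend step can be illustrated with a deliberately simplified sketch: the completed flow is assumed to be zero (static content), so propagation along the flow reduces to copying pixels straight across time, and the forward/backward candidates are blended by temporal-distance weights. Names and shapes are hypothetical:

```python
import numpy as np

def propagate_and_blend(frames, known):
    """Fill missing frames from the nearest known frames forward and
    backward in time, blending the two candidates with weights inversely
    proportional to temporal distance.

    frames: (T, H, W, C) array; known: bool array of length T.
    """
    T = len(frames)
    out = frames.copy()
    for t in range(T):
        if known[t]:
            continue
        fwd = next((s for s in range(t - 1, -1, -1) if known[s]), None)
        bwd = next((s for s in range(t + 1, T) if known[s]), None)
        if fwd is not None and bwd is not None:
            d_f, d_b = t - fwd, bwd - t
            w_f, w_b = d_b / (d_f + d_b), d_f / (d_f + d_b)  # closer => heavier
            out[t] = w_f * frames[fwd] + w_b * frames[bwd]
        elif fwd is not None:
            out[t] = frames[fwd]
        elif bwd is not None:
            out[t] = frames[bwd]
        # else: residual hole -- left for a single-frame inpainting network
    return out

frames = np.zeros((5, 2, 2, 3))
frames[4] = 1.0
known = np.array([True, False, False, False, True])
filled = propagate_and_blend(frames, known)  # linear ramp 0.25, 0.5, 0.75
```

In the real pipeline the copy follows the completed per-pixel flow field rather than a straight line through time, but the distance-weighted blend has the same structure.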

Methods leveraging flow-guided propagation achieve high spatial fidelity and temporal stability, provided that flow estimation is sufficiently accurate. Unfilled regions tend to occur where the true content is never visible ("disocclusion") or flow estimation fails (e.g., due to fast motion) (Xu et al., 2019). To address this, "scene template" models (Lao et al., 2021) enforce global spatiotemporal consistency by jointly optimizing a 2D background template and consistent per-frame warps, then inpainting via an $L_2$-$L_1$ temporal-interpolation scheme.

4. Deep Learning Architectures for Spatiotemporal Inpainting

Modern methods predominantly employ deep convolutional or transformer-based architectures that explicitly encode spatial and temporal reasoning:

  • Multi-Stream Encoders and Temporal Fusion: Methods such as VINet (Kim et al., 2019) encode temporal context from a window of neighboring frames, learn featurewise motion alignment, and introduce recurrent feedback (ConvLSTM) to propagate and aggregate temporal memory. Feature-level aggregation often involves learned flow-guided warping and composition masks, with skip-connections to preserve spatial detail.
  • Convolutional LSTM and Frame-Recurrent Pipelines: Frame-recurrent approaches (Ding et al., 2019) combine per-frame inpainting with temporal refinement using ConvLSTM units. Flow is estimated from both filled and corrupted frames, blended by a small U-Net module, and used for warping previous outputs, ensuring efficient streaming capability and maintaining temporal coherence.
  • Transformer-Based Architectures: Transformers have enabled efficient and accurate modeling of nonlocal dependencies (Liu et al., 2021, Yu et al., 2023). Decoupled architectures perform attention separately along the temporal and spatial axes—temporally-decoupled blocks capture motion and temporal propagation in spatial zones, while spatially-decoupled ones reconstruct local texture. Token-selection and mask-activation strategies, as seen in DMT (Yu et al., 2023), further accelerate inference by pruning irrelevant (fully-masked) tokens.
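The decoupled attention pattern can be sketched with a parameter-free toy (the tokens themselves serve as Q, K, and V); this illustrates the axis factorization only, not any specific paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Plain scaled dot-product attention over the second-to-last axis."""
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def decoupled_attention(tokens):
    """Decoupled space-time attention: attend along time at each spatial
    position, then along space within each frame.

    tokens: array of shape (T, N, C) -- T frames, N spatial tokens, C dims.
    """
    # Temporal pass: sequences of length T at each spatial location.
    x = np.transpose(tokens, (1, 0, 2))   # (N, T, C)
    x = attention(x, x, x)
    x = np.transpose(x, (1, 0, 2))        # back to (T, N, C)
    # Spatial pass: sequences of length N within each frame.
    return attention(x, x, x)

tokens = np.random.rand(4, 9, 8)          # 4 frames, 3x3 spatial tokens, dim 8
out = decoupled_attention(tokens)
```

The point of the factorization is cost: full space-time attention scales as $O((TN)^2)$ per layer, whereas the decoupled passes cost $O(T^2 N + N^2 T)$.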

Ablation studies consistently demonstrate that integration of explicit alignment modules (e.g., feature flow networks, cascaded attention) and recurrent memory are critical for long-term temporal stability, minimizing flicker, and maintaining structural consistency across frames (Kim et al., 2019, Ding et al., 2019, Liu et al., 2021).

5. Generative Diffusion Models and Unified Space-Time Inpainting

Recent frameworks employ diffusion-based models to unify spatial, temporal, and even multimodal inpainting tasks within a single generative paradigm:

  • Unified architecture: UniPaint (Wan et al., 2024) introduces a plug-and-play Space-Time Inpainting Adapter and Mixture-of-Experts (MoE) attention, allowing the model to adaptively route computation according to mask structure—spatial, marginal, or purely temporal (interpolation) masks. Training on mixed mask types enables robust generalization to diverse space-time inpainting tasks.
  • Mask-conditioning and MoE routing: By concatenating noisy input, VAE-encoded masked video, and mask, and then applying routing over multiple expert FFNs per layer (using the downsampled mask as gating input), computational focus dynamically matches the inpainting scenario. Deterministic per-expert routing ablations confirm specialization by expert, e.g., spatial inpainting versus temporal interpolation.
  • Training objectives: Losses are primarily standard denoising diffusion ($\ell_2$) objectives, eschewing explicit perceptual or adversarial terms; temporal consistency emerges from mask diversity and architectural depth.
  • Applications: VideoPDE (Li et al., 16 Jun 2025) generalizes this paradigm to scientific machine learning, recasting PDE-solving as spatiotemporal inpainting—by conditioning on arbitrary observed sensor layouts and inferring missing data across time and space in a single forward pass.
  • Advantages and limitations: Diffusion frameworks offer flexibility and strong generative priors, excelling at inpainting, outpainting, and temporal interpolation in a unified manner. Remaining challenges include handling highly dynamic, out-of-distribution motion and scaling to very high resolutions or long-duration sequences (Wan et al., 2024, Li et al., 16 Jun 2025).
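The mask-gated MoE routing idea can be sketched with a toy numpy layer; the shapes, bias-free experts, and the specific gating rule (softmax over per-expert weights applied to the downsampled mask value) are illustrative assumptions, not UniPaint's exact parameterization:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_ffn(tokens, mask_ds, expert_weights, gate_weights):
    """Mixture-of-experts FFN gated by the downsampled inpainting mask:
    each token's mix of experts depends on how masked that token is.

    tokens:         (N, C) token features.
    mask_ds:        (N,) downsampled mask values in [0, 1] per token.
    expert_weights: (E, C, C) one linear expert per slice (bias-free sketch).
    gate_weights:   (E,) per-expert gate parameters applied to the mask.
    """
    gate = softmax(np.outer(mask_ds, gate_weights), axis=-1)       # (N, E)
    expert_out = np.einsum('nc,ecd->ned', tokens, expert_weights)  # (N, E, C)
    return np.einsum('ne,ned->nd', gate, expert_out)               # (N, C)

rng = np.random.default_rng(0)
N, C, E = 6, 4, 3
tokens = rng.normal(size=(N, C))
mask_ds = np.array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0])  # 0 = known, 1 = masked
experts = rng.normal(size=(E, C, C))
gates = rng.normal(size=E)
out = moe_ffn(tokens, mask_ds, experts, gates)
```

Because the gate sees only the mask, fully-known, boundary, and fully-masked tokens are routed to different expert mixtures, which is the mechanism the per-expert ablations probe.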

6. Performance Evaluation and Benchmarks

Evaluation of spatiotemporal inpainting employs a range of quantitative and qualitative metrics:

| Metric | Description | Used In |
| --- | --- | --- |
| PSNR / SSIM | Signal fidelity, per-frame or inside hole | (Zou et al., 2021) |
| VFID / LPIPS | Perceptual similarity, video-based FID | (Zou et al., 2021; Wan et al., 2024) |
| Flow warping error | Temporal consistency (error in flow-warped reconstructions) | (Kim et al., 2019) |
| User studies | Subjective rankings for realism and stability | (Xu et al., 2019; Kim et al., 2019) |
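Two of these metrics are easy to state concretely. The sketch below implements PSNR and a nearest-neighbour flow-warping-error proxy; benchmark implementations typically use bilinear warping and occlusion masks, which are omitted here for brevity:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio between reference and reconstruction."""
    mse = np.mean((ref - rec) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def flow_warping_error(frame_t, frame_t1, flow):
    """Temporal-consistency proxy: MSE between frame t and frame t+1
    warped back along integer (nearest-neighbour) flow offsets.

    frame_t, frame_t1: (H, W, C) arrays; flow: (H, W, 2) integer offsets.
    """
    H, W, _ = frame_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yw = np.clip(ys + flow[..., 0], 0, H - 1)
    xw = np.clip(xs + flow[..., 1], 0, W - 1)
    warped = frame_t1[yw, xw]
    return float(np.mean((frame_t - warped) ** 2))

rng = np.random.default_rng(1)
a = rng.random((8, 8, 3))
b = np.roll(a, shift=1, axis=1)        # next frame: content shifted 1 px right
flow = np.zeros((8, 8, 2), dtype=int)
flow[..., 1] = 1                       # flow that undoes the shift
```

With the correct flow, warping `b` back nearly reproduces `a` (up to border pixels), so the warping error drops far below the zero-flow baseline; a temporally coherent inpainting should behave the same way under its estimated flow.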

Notable outcome highlights:

  • In (Zou et al., 2021), PTFA achieves PSNR 35.40, SSIM 0.9129, and VFID 0.5927 on the DAVIS benchmark, surpassing STTN and FFVI baselines.
  • UniPaint (Wan et al., 2024) achieves BP 41.8, TA 31.8, and TC 97.5 on DAVIS, with ablations supporting the impact of mixed-mask training and MoE routing.
  • VideoPDE (Li et al., 16 Jun 2025) reaches an $L_2$ error of 0.44% on Navier–Stokes forward prediction, outperforming prior diffusion and operator learning models.

Ablations and qualitative results repeatedly emphasize that mask diversity, adaptive feature alignment, recurrence (temporal memory), and specialized attention mechanisms are critical for achieving state-of-the-art performance and temporal coherence.

7. Current Limitations and Research Directions

Persistent limitations in spatiotemporal inpainting include:

  • Long occlusions: Extremely long-term holes may lead to failure or unstable recovery, especially if no reference content exists in the unmasked region (Newson et al., 2015, Zou et al., 2021).
  • Motion and composition diversity: Scenes with highly dynamic or non-rigid motion, extremely complex object interactions, or novel content can induce hallucination artifacts, drift, or temporal inconsistency (Newson et al., 2015, Wan et al., 2024).
  • Scalability: Transformer and dense attention models are constrained by memory overhead when deployed on full-resolution or long-duration videos; hierarchical or token-pruned schemes partially address this (Yu et al., 2023, Srinivasan et al., 2021).
  • Unified modeling: Diffusion-based and transformer methods are closing the gap among spatial, temporal, and even multi-modal inpainting, yet domain gaps remain—especially when transferring pre-trained image inpainting priors to video or scientific data (Yu et al., 2023, Li et al., 16 Jun 2025).

Collectively, spatiotemporal inpainting research demonstrates a broad spectrum of mathematical and algorithmic innovations that are rapidly bridging spatial, temporal, physical, and semantic reasoning in video and dynamic scene understanding.
