
Multi-View Pyramid Transformers (MVP)

Updated 10 February 2026
  • Multi-View Pyramid Transformers are scalable, hierarchical attention architectures designed to reconstruct accurate 3D scenes from numerous images in one forward pass.
  • They combine local, group-wise, and global self-attention to merge spatial tokens progressively, preserving fine geometric details while boosting efficiency.
  • MVP integrates with 3D Gaussian Splatting to predict 3D primitives, achieving state-of-the-art performance on dense neural reconstruction benchmarks.

Multi-View Pyramid Transformers (MVP) are a class of scalable multi-view transformer architectures designed to directly reconstruct large-scale 3D scenes from tens to hundreds of images in a single forward pass. MVP achieves this by combining a local-to-global inter-view attention hierarchy, which grows the receptive field from per-frame to global, with a fine-to-coarse intra-view hierarchy that progressively merges tokens spatially while increasing feature dimensionality. The architecture natively integrates with dense and generalizable 3D scene representations, such as 3D Gaussian Splatting, enabling efficient, high-fidelity reconstructions with a high degree of scalability and computational efficiency (Kang et al., 8 Dec 2025).

1. Architectural Design and Dual Hierarchy

The MVP architecture is a pure transformer comprising a three-stage pyramid where self-attention is organized both across and within views.

  • Input Processing: Each posed image $I_i \in \mathbb{R}^{H \times W \times 3}$ is augmented with its corresponding Plücker-ray map $P_i \in \mathbb{R}^{H \times W \times 9}$, producing a composite 12-channel tensor. A linear patch embedding projects non-overlapping $p \times p$ patches of this tensor to an initial dimension $d_0$, resulting in token sets $T_0 \in \mathbb{R}^{N \cdot (HW/p^2) \times d_0}$, with four additional learnable "register" tokens per view.
  • Three-stage Pyramid: Each stage $s$ processes tokens $T_s \in \mathbb{R}^{N \cdot h_s \cdot w_s \times d_s}$, where
    • $h_{s+1} = h_s / 2$, $w_{s+1} = w_s / 2$, and $d_{s+1} = 2 d_s$.
    • A $3 \times 3$ convolution with stride 2 merges spatial neighborhoods, and the feature dimension is doubled.
    • Each stage alternates among:
    • Frame-wise self-attention: applied to each view independently;
    • Group-wise self-attention: applied within small groups ($M$ views per group, typically $M = 4$) to capture mid-range dependencies;
    • Global self-attention: applied when $M = N$ (all views), generally in the last block.
  • Pyramidal Feature Aggregation (PFA): Feature maps from all three stages are fused top-down using explicit upsampling and small residual conv blocks:

$$F = \mathrm{fuse}\left( \mathrm{up}\big( \mathrm{fuse}(\mathrm{up}(F^3) + F^2) \big) + F^1 \right)$$

where $F^s$ are the spatial maps from each stage.
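The PFA recursion can be sketched in a few lines of NumPy. The 1×1 channel projections and the residual 1×1 "fuse" blocks below are simplifying assumptions (the paper uses small residual conv blocks), and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    """1x1 convolution, i.e. pure channel mixing: w is (C_out, C_in)."""
    return np.einsum("oc,chw->ohw", w, x)

def fuse(x, w):
    """Residual 'fuse' block; a 1x1 residual conv stands in for the
    paper's small residual conv blocks (an assumption)."""
    return x + conv1x1(x, w)

# Stage maps with the paper's widths d = (256, 512, 1024), tiny spatial sizes.
d1, d2, d3 = 256, 512, 1024
f1 = rng.normal(size=(d1, 16, 16))
f2 = rng.normal(size=(d2, 8, 8))
f3 = rng.normal(size=(d3, 4, 4))

# Learned channel projections are needed because stage widths differ
# (an assumption; the source does not spell this detail out).
w32 = rng.normal(size=(d2, d3)) * 0.01
w21 = rng.normal(size=(d1, d2)) * 0.01
wf2 = rng.normal(size=(d2, d2)) * 0.01
wf1 = rng.normal(size=(d1, d1)) * 0.01

# F = fuse(up(fuse(up(F3) + F2)) + F1)
x = fuse(upsample2x(conv1x1(f3, w32)) + f2, wf2)
x = fuse(upsample2x(conv1x1(x, w21)) + f1, wf1)
print(x.shape)  # (256, 16, 16)
```

The top-down order matters: the coarse stage-3 map is refined twice before meeting the fine stage-1 map, so global context reaches full resolution.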

This dual-hierarchical arrangement ensures that local geometric detail is preserved (through frame-wise attention), medium-range consistency is enforced (via group-wise attention), and global coherence is achieved efficiently at low spatial resolution.

2. Mathematical Foundations and Attention Mechanisms

The core operation of MVP's transformer backbone is scaled dot-product attention. For query $Q \in \mathbb{R}^{n \times d_k}$, key $K \in \mathbb{R}^{m \times d_k}$, and value $V \in \mathbb{R}^{m \times d_v}$:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(QK^\top / \sqrt{d_k}\right) V$$

Inter-view grouping is realized via reshaping and grouping operators:

  • Given tokens $T \in \mathbb{R}^{N \cdot h \cdot w \times d}$, they are reshaped into groups $G \in \mathbb{R}^{(N/M) \times (M \cdot h \cdot w) \times d}$.
  • Within each group $i$:
    • Frame-wise: $G_{i,j} \leftarrow \mathrm{Attn}(G_{i,j}, G_{i,j}, G_{i,j})$
    • Group-wise: $T'_i \leftarrow \mathrm{Attn}(\mathrm{concat}_j G_{i,j}, \mathrm{concat}_j G_{i,j}, \mathrm{concat}_j G_{i,j})$
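The reshape-then-attend pattern can be illustrated with plain NumPy; the array sizes and token ordering are illustrative, not the paper's exact implementation:

```python
import numpy as np

def attn(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

N, h, w_, d, M = 8, 4, 4, 32, 4      # views, token grid, dim, group size
t = h * w_                           # tokens per view
rng = np.random.default_rng(0)
T = rng.normal(size=(N * t, d))      # flattened multi-view tokens

# Frame-wise: attention restricted to each view's own t tokens.
frames = T.reshape(N, t, d)
frame_out = attn(frames, frames, frames)        # (N, t, d)

# Group-wise: reshape into N/M groups of M*t tokens, attend within each.
groups = T.reshape(N // M, M * t, d)
group_out = attn(groups, groups, groups)        # (N/M, M*t, d)

# Global: a single group holding all N*t tokens (the M = N case).
global_out = attn(T[None], T[None], T[None])    # (1, N*t, d)
print(frame_out.shape, group_out.shape, global_out.shape)
```

No new attention kernel is needed: the three scopes differ only in how the token tensor is reshaped before the same batched attention call.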

Complexity analysis reveals MVP's efficiency:

  • Full global attention costs $\mathcal{O}((Nt)^2 d)$, where $t$ is the number of tokens per view.
  • Frame-wise and group-wise attention reduce this to $\mathcal{O}(N t^2 d)$ and $\mathcal{O}(N M t^2 d)$, respectively. At high spatial resolutions, the architecture relies on local and group-wise attention, switching to global attention only after spatial downsampling.
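A quick cost-model calculation makes the savings concrete (the counts are illustrative and constants are ignored):

```python
# Attention cost model: tokens^2 * d per attention scope, constants ignored.
N, t, d, M = 64, 2040, 512, 4   # views, tokens/view, dim, group size (illustrative)

global_cost = (N * t) ** 2 * d              # one global attention over all tokens
frame_cost  = N * t ** 2 * d                # N independent t-token attentions
group_cost  = (N // M) * (M * t) ** 2 * d   # N/M groups of M*t tokens = N*M*t^2*d

print(global_cost / frame_cost)   # N   -> 64.0x cheaper than global
print(global_cost / group_cost)   # N/M -> 16.0x cheaper than global
```

The ratios are exactly $N$ and $N/M$: grouped attention's overhead over frame-wise attention is only a factor of $M$, which is why a small $M=4$ buys cross-view context cheaply.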

3. Integration with 3D Gaussian Splatting

After PFA, the feature map is re-tokenized and fed into a linear head which predicts, for each image pixel (u,v)(u, v), the parameters of a 3D Gaussian primitive:

  • Center $\mu \in \mathbb{R}^3$, scale $s \in \mathbb{R}_+$, rotation quaternion $q$, opacity $\alpha$, RGB color $c$.
  • Spherical harmonic coefficients (up to degree 1 for color, degree 2 for opacity) are also predicted, capturing view-dependent effects.
  • Rendering mirrors 3D Gaussian Splatting (3DGS), but MVP infers all primitives in a single pass, obviating iterative fitting.
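A minimal sketch of such a per-pixel linear head follows. The channel split and activations are plausible choices rather than the paper's exact parameterization, and the spherical-harmonic channels are omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_gaussians(feat, W, b):
    """Linear head mapping per-pixel features to 3DGS primitive parameters.
    Assumed split: 3 center + 3 scale + 4 quaternion + 1 opacity + 3 RGB = 14.
    """
    raw = feat @ W + b                            # (H*W, 14)
    center  = raw[:, 0:3]                         # unconstrained 3D center
    scale   = np.exp(raw[:, 3:6])                 # positivity via exp
    quat    = raw[:, 6:10]
    quat    = quat / np.linalg.norm(quat, axis=1, keepdims=True)  # unit quat
    opacity = sigmoid(raw[:, 10:11])              # in (0, 1)
    rgb     = sigmoid(raw[:, 11:14])              # in (0, 1)
    return center, scale, quat, opacity, rgb

rng = np.random.default_rng(0)
H, W_, d = 8, 8, 64
feat = rng.normal(size=(H * W_, d))               # fused per-pixel features
Wh, bh = rng.normal(size=(d, 14)) * 0.1, np.zeros(14)
center, scale, quat, opacity, rgb = decode_gaussians(feat, Wh, bh)
print(center.shape, scale.min() > 0)
```

Because every pixel emits one primitive in a single matrix multiply, the full Gaussian set for all views is produced feed-forward, with no per-scene optimization loop.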

The optimization objective includes:

  • Photometric loss:

$$\mathcal{L}_{\mathrm{img}} = \sum_{i \in \mathcal{T}} \left\| \mathrm{Render}(\{\mathrm{Gaussians}\}, \mathrm{pose}_i) - I_i \right\|_2^2 + \lambda \, \mathrm{Perceptual}(\cdot)$$

  • Opacity regularization:

$$\mathcal{R}_\alpha = \frac{1}{N_G} \sum_j \left| \sigma(\alpha_j \omega_j) \right|$$

  • Total loss: $\mathcal{L} = \mathcal{L}_{\mathrm{img}} + \gamma \mathcal{R}_\alpha$.
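The objective can be sketched as follows. The perceptual term is stubbed out, the $\omega_j$ weights are treated as given (the source does not define them), and all names and coefficient values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_loss(renders, targets, opacities, weights, lam=0.5, gamma=0.01):
    """L = L_img + gamma * R_alpha (illustrative; perceptual term stubbed)."""
    # Photometric term: sum of squared errors over the training views.
    l_img = sum(np.sum((r - tgt) ** 2) for r, tgt in zip(renders, targets))
    # A real implementation adds lam * Perceptual(render, target) here.
    l_img = l_img + lam * 0.0
    # Opacity regularizer: mean absolute sigmoided opacity, pushing
    # redundant Gaussians toward transparency.
    sig = 1.0 / (1.0 + np.exp(-opacities * weights))
    r_alpha = np.mean(np.abs(sig))
    return l_img + gamma * r_alpha

renders = [rng.random((4, 4, 3)) for _ in range(2)]   # dummy rendered views
targets = [rng.random((4, 4, 3)) for _ in range(2)]   # dummy ground truth
opac    = rng.normal(size=100)                        # pre-activation opacities
wts     = np.ones(100)                                # placeholder omega_j
L = total_loss(renders, targets, opac, wts)
print(L > 0)
```

The regularizer is what lets the single-pass predictor over-allocate primitives safely: superfluous Gaussians are driven toward zero opacity rather than polluting the render.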

4. Scalability and Implementation

MVP is engineered for scalability in both resolution and the number of input images:

  • Patch sizes: $p = (8, 16, 32)$ across the three stages; token counts per view drop accordingly.
  • Feature dimensions: $d = (256, 512, 1024)$ in successive stages.
  • Block allocation per stage: [2 frame-wise, 4 group-wise, 8 global].
  • Group size: $M = 4$ views, identified via ablations as the best balance of context aggregation and computational cost.
  • Inputs of up to 256 images at $960 \times 540$ are processed in under 2 seconds on an NVIDIA H100 GPU with FlashAttention 3.
  • Training curriculum: three phases—low-resolution 32 views, followed by high-resolution 32 views, then multi-view mixing, with stage-wise freezing.
  • Positional encoding combines Plücker-ray concatenation and PRoPE relative pose encoding.
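The patch sizes translate into per-stage token budgets as follows (illustrative bookkeeping: real implementations pad $H$ and $W$ to multiples of the patch size, and register tokens are ignored here):

```python
# Per-stage token budgets for the configuration quoted above.
H, W = 960, 540
N = 256                         # number of input views
patch = (8, 16, 32)             # effective patch size per stage
dims  = (256, 512, 1024)        # feature width per stage

for p, d in zip(patch, dims):
    tokens_per_view = (H * W) // (p * p)
    total = N * tokens_per_view
    print(f"p={p:2d}  d={d:4d}  tokens/view={tokens_per_view:5d}  total={total:,}")
```

At stage 1 the 256-view scene carries roughly two million tokens, which is exactly why attention there must stay frame-wise; by stage 3 the count has shrunk ~16x, making global attention affordable.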

5. Empirical Performance and Ablation Studies

MVP demonstrates state-of-the-art performance on several dense neural reconstruction benchmarks:

  • DL3DV ($960 \times 540$):
    • $N = 16$: PSNR 23.76, SSIM 0.798, LPIPS 0.239, runtime 0.09 s (vs. iLRM 21.92, Long-LRM 21.05)
    • $N = 64$: PSNR 27.73, SSIM 0.881, LPIPS 0.154, time 0.36 s
    • $N = 128$: PSNR 29.02, SSIM 0.903, LPIPS 0.134, time 0.77 s
    • For $N = 192, 256$: MVP achieves ~29.6 dB in 1–2 s (iLRM ~21 dB in 12–21 s; Long-LRM out-of-memory)
  • Generalization: On zero-shot Mip-NeRF360 and Tanks and Temples, MVP outperforms iLRM and Long-LRM by over 1 dB PSNR and achieves superior SSIM and LPIPS.
  • Low-res RE10K ($256 \times 256$, $N = 4/8$): PSNR 32.12–33.40, outperforming iLRM by 1.7–4.5 dB.

Ablation studies reveal critical contributions:

  • Removing PFA degrades PSNR from 23.18 to 21.58 (−1.6 dB).
  • Replacing group-wise with frame-wise attention: 23.18 → 22.53.
  • Replacing group-wise with all-global attention: 23.18 → 22.94 (significantly slower, and out-of-memory at $N = 256$).
  • Eliminating the intra-view hierarchy (no spatial pyramid): 22.83; out-of-memory at high $N$.
  • Reversing the hierarchy order (coarse-to-fine): 18.95 dB.
  • Longer-context generalization: training on 32 views and testing on 40/48 views yields +0.48/+1.18 dB gains (baselines saturate below 0.5 dB).

6. Context and Related Architectures

MVP is part of a broader movement integrating hierarchical and multi-view transformers for visual representation learning. While "Multiview Transformers for Video Recognition" (MTV) (Yan et al., 2022) similarly exploits pyramidal multi-view tokenization—using parallel transformers for different spatio-temporal scales and cross-view attention—the MVP framework is tailored for spatial multi-view (multi-camera) 3D scene understanding rather than temporal video understanding. Both approaches establish that fusing information across granularities inside transformer backbones yields substantial gains in computational efficiency and representation quality compared to deepening single-view transformers or resorting to uniformly global attention. A plausible implication is that hierarchical attention mechanisms may generalize across input modalities and data domains wherever both local detail and global consistency are required.

7. Significance and Impact

MVP establishes a new state of the art in scalable, generalizable 3D scene reconstruction from images, providing not only strong performance but also empirical evidence on how to allocate attention efficiently across local, medium-range, and global view groupings. The architectural choices underpinning MVP—particularly its dual pyramid hierarchy and tailored fusion modules—enable real-time or near-real-time scene modeling from dense view collections at high spatial resolutions, pushing transformer-based approaches beyond prior memory-bound and compute-bound regimes (Kang et al., 8 Dec 2025). The synergy observed between MVP and modern scene representations such as 3D Gaussian Splatting illustrates the promise of explicitly structured multi-view transformers for geometric computer vision and, potentially, for other high-dimensional spatial inference problems.
