
UniVLG: Unified 2D/3D Vision-Language Architecture

Updated 5 February 2026
  • UniVLG is a unified architecture for 2D and 3D vision-language understanding that integrates pre-trained encoders with 3D sensory data.
  • It employs lightweight 3D attention and a novel language-conditioned mask decoder to directly predict segmentation masks and textual grounding without mesh reconstruction.
  • The design benefits from co-training on diverse 2D and 3D datasets, yielding robust cross-modal grounding and a 2–3% performance boost over standard methods.

UniVLG is a unified architecture for 2D and 3D vision-language understanding that bridges 2D-centric models and embodied 3D sensory data. It leverages pre-trained 2D vision-language encoders, lightweight 3D attention, and a novel language-conditioned mask decoder to achieve performant cross-modal grounding and instance segmentation in both 2D (RGB) and 3D (RGB-D/pointcloud) settings—eliminating the dependence on mesh reconstruction or ground-truth object proposals for realistic, embodied-aligned evaluation (Jain et al., 13 Mar 2025).

1. Network Structure and Modular Components

UniVLG operates on either a set of $N$ posed RGB-D views of a scene or a single RGB image that can be lifted to a 3D pointmap. It ingests a natural-language query (e.g., "the red chair next to the table") and processes inputs via a series of tightly integrated modules:

  • 2D Vision Encoder: Typically a frozen or lightly fine-tuned ViT-like backbone (e.g., DINOv2 ViT or Swin) produces image patch features $V_{2D} \in \mathbb{R}^{(H \cdot W) \times D}$.
  • 3D Attention Layers: k-NN self-attention across 3D points (from unprojected sensor depth or monocularly estimated depth) with relative positional embeddings based on $(x, y, z)$ coordinates, fusing multi-view information into $V \in \mathbb{R}^{M \times D}$.
  • Language Encoder: A frozen CLIP-style text encoder (Jina-CLIP) generates textual features $T \in \mathbb{R}^{L \times D}$.
  • Language-Conditioned Mask Decoder: Operates on a set of $K$ learnable queries $Q^{(0)} \in \mathbb{R}^{K \times D}$ using alternating masked cross-attention, self-attention, and feature-update steps to directly predict segmentation masks and grounded text spans.
  • Text Decoder: For open-ended question answering, a small T5 decoder conditions on refined queries $Q^{(F)}$ to generate free-form answers.
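To make the 3D attention bullet concrete, here is a minimal NumPy sketch of k-NN self-attention over 3D points. It is illustrative only: the real model uses learned projections and learned relative positional embeddings, whereas this sketch substitutes a simple distance-based bias (an assumption of the sketch, not the paper's parameterization):

```python
import numpy as np

def knn_3d_attention(feats, xyz, k=8):
    """Sketch of k-NN self-attention over 3D points.

    feats: (M, D) visual token features
    xyz:   (M, 3) 3D coordinates of each token
    Each token attends only to its k nearest neighbours in 3D; a toy
    distance-based bias (stand-in for learned relative positional
    embeddings) is added to the attention logits.
    """
    M, D = feats.shape
    # pairwise squared distances in 3D
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)   # (M, M)
    nbr = np.argsort(d2, axis=1)[:, :k]                       # (M, k) neighbour indices

    out = np.empty_like(feats)
    for i in range(M):
        j = nbr[i]
        # scaled dot-product logits over the k neighbours
        logits = feats[j] @ feats[i] / np.sqrt(D)
        rel = xyz[j] - xyz[i]                                 # (k, 3) relative offsets
        logits = logits - 0.1 * np.linalg.norm(rel, axis=-1)  # nearer points weighted up
        w = np.exp(logits - logits.max())
        w /= w.sum()
        out[i] = w @ feats[j]                                 # weighted neighbour average
    return out
```

Restricting attention to k nearest neighbours keeps the cost linear in the number of tokens, which is what makes the 3D layers "lightweight" relative to full self-attention over all views.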

The architecture integrates these modules as follows:

encode_language(text) → T
encode_images(views) → V_2D
lift_or_unproject_depth(depth or pred_depth) → XYZ coords
V ← 3D_kNN_Attn(V_2D, XYZ)
Q^{(0)} ← init_queries()
for i in 0…N_layers−1:
  X ← concat(Q^{(i)}, T)
  X ← MaskedCrossAttn(X, V) + X
  X ← SelfAttn(X) + X
  V ← CrossAttn(V, X) + V
  Q^{(i+1)} ← X[:, :K]
masks ← decode_masks(Q^{(F)}, V)
spans ← decode_spans(Q^{(F)}, T)
if VQA: text_ans ← T5_Decoder(Q^{(F)})
(Jain et al., 13 Mar 2025)
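The decoder loop above can be sketched as a minimal single-head NumPy implementation. This is a shape-faithful toy, not the paper's implementation: learned query/key/value projections, multi-head splitting, and layer norms are omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn(q, kv, mask=None):
    """Single-head scaled dot-product attention (no learned projections)."""
    logits = q @ kv.T / np.sqrt(q.shape[-1])
    if mask is not None:
        logits = np.where(mask, logits, -1e9)  # suppress disallowed positions
    return softmax(logits) @ kv

def decoder_layer(Q, T, V, vis_mask=None):
    """One language-conditioned decoder layer, following the pseudocode.

    Q: (K, D) object queries, T: (L, D) text tokens, V: (M, D) visual tokens.
    vis_mask: optional (K+L, M) boolean, True where attention is allowed.
    """
    K = Q.shape[0]
    X = np.concatenate([Q, T], axis=0)  # queries and text processed jointly
    X = attn(X, V, vis_mask) + X        # masked cross-attention to visual tokens
    X = attn(X, X) + X                  # self-attention among queries + text
    V = attn(V, X) + V                  # feedback: visual tokens read from X
    return X[:K], V                     # updated queries and visual features
```

Note the residual (`+ X`, `+ V`) structure on each sub-step, matching the pseudocode, and that the visual features themselves are refined by the feedback cross-attention rather than staying fixed across decoder layers.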

2. Model Initialization and Training Paradigm

  • Pre-trained 2D Vision-Language Backbones: The 2D vision encoder is initialized from internet-scale, pre-trained ViT or Swin models (CLIP, DINOv2). The language encoder is a frozen CLIP-style tower.
  • Transfer to 3D: 3D layers (transformer self-/cross-attention) are initialized by copying weights from the 2D modules, with added relative $(x, y, z)$ position biases. The object-query heads and the mask decoder's prediction MLPs are newly initialized.
  • Parameterization: Typical hyperparameters include 108M trainable parameters, a frozen 220M-parameter text encoder, a frozen 304M-parameter DINOv2 vision encoder, and 6–8 decoder layers.
  • Training Data: Co-trained jointly on large-scale 2D datasets (RefCOCO/g, COCO) and multiple 3D datasets (SR3D, NR3D, ScanRefer, ScanQA, SQA3D, ScanNet200, Matterport3D), using multi-task sampling in each gradient step (Jain et al., 13 Mar 2025).
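A minimal sketch of what per-step multi-task sampling could look like. The dataset names come from the text; the sampler itself (the mixing probability `p_3d` and per-step batch layout) is purely illustrative, not the paper's recipe:

```python
import random

# Dataset names as cited in the text; grouping by modality is the only structure assumed.
DATASETS_2D = ["RefCOCO", "RefCOCOg", "COCO"]
DATASETS_3D = ["SR3D", "NR3D", "ScanRefer", "ScanQA", "SQA3D", "ScanNet200", "Matterport3D"]

def sample_step(rng, p_3d=0.5, batch_per_source=2):
    """Draw one mixed minibatch spec: each gradient step blends 2D and 3D sources."""
    batch = []
    for _ in range(batch_per_source):
        pool = DATASETS_3D if rng.random() < p_3d else DATASETS_2D
        batch.append(rng.choice(pool))
    return batch
```

Because every gradient step sees a blend of modalities, the shared decoder never drifts toward a single domain between updates.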

3. Language-Conditioned Mask Decoder and Objective Design

The mask decoder is central to UniVLG's unified approach:

  • Decoder Workflow: At each layer, the decoder alternately applies masked cross-attention (from queries+text to visual tokens), self-attention among queries+text, and feedback cross-attention (from visual to queries+text). Learnable mask queries are updated at every step.
  • Output: Final query states $Q^{(F)}$ are used to predict masks, text grounding spans, and—when available—free-form answers.
  • Loss Terms:
    • $\mathcal{L}_{\text{mask}}$: Binary cross-entropy and Dice loss on segmentation masks.
    • $\mathcal{L}_{\text{text}}$: Binary cross-entropy on grounding logits between final queries and text tokens.
    • $\mathcal{L}_{\text{box}}$: L1 and GIoU loss on predicted mask boxes vs. ground truth (applied only to real 3D or clean 2D data).
    • $\mathcal{L}_{\text{gen}}$: Cross-entropy for VQA answer generation.
  • Unified Multimodal Decoding: The same decoder parameters are employed for both 2D and 3D grounding, QA, and segmentation (Jain et al., 13 Mar 2025).
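The $\mathcal{L}_{\text{mask}}$ term (BCE + Dice) can be illustrated with a small NumPy sketch; the weights `w_bce` and `w_dice` are illustrative stand-ins for the paper's loss coefficients:

```python
import numpy as np

def bce_loss(pred_logits, target):
    """Per-pixel binary cross-entropy on mask logits."""
    p = 1.0 / (1.0 + np.exp(-pred_logits))  # sigmoid
    eps = 1e-7
    return -(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps)).mean()

def dice_loss(pred_logits, target, eps=1.0):
    """Dice loss: 1 - 2|A∩B| / (|A| + |B|), on sigmoid probabilities."""
    p = 1.0 / (1.0 + np.exp(-pred_logits))
    inter = (p * target).sum()
    return 1.0 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

def mask_loss(pred_logits, target, w_bce=1.0, w_dice=1.0):
    """Combined mask objective: weighted BCE + Dice."""
    return w_bce * bce_loss(pred_logits, target) + w_dice * dice_loss(pred_logits, target)
```

BCE supervises every pixel independently, while Dice is a region-overlap measure that is robust to foreground/background imbalance; combining the two is standard practice in mask-transformer decoders.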

4. 2D-to-3D Lifting and Multi-Modal Feature Association

UniVLG leverages abundant 2D vision-language data for improved 3D learning by employing explicit 2D-to-3D lifting strategies:

  • Monocular Lifting: Depth for each pixel is estimated by a monocular network (e.g., MoGe), then used to unproject image features into a 3D pointmap: $p(u,v) = K^{-1} [u, v, 1]^T \, d(u,v)$.
  • Feature Binding: Each 2D patch feature $z(u,v)$ is paired with a 3D location $p(u,v)$, creating position-embedded visual tokens for 3D attention.
  • Training/Inference Regimes: At train time, half of 2D batches are processed in pure 2D mode, the other half are subjected to lifting and 3D attention; inference retains pure 2D paths for image-only tasks to avoid depth noise.
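The unprojection step follows directly from the pinhole formula above; a minimal NumPy sketch (camera-frame coordinates only, no camera-to-world transform):

```python
import numpy as np

def unproject(depth, K):
    """Lift a depth map to a 3D pointmap: p(u,v) = K^{-1} [u, v, 1]^T * d(u, v).

    depth: (H, W) per-pixel depth (e.g., from a monocular estimator)
    K:     (3, 3) camera intrinsics
    Returns (H, W, 3) camera-frame 3D coordinates.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]                    # pixel grid (row index v, column index u)
    pix = np.stack([u, v, np.ones_like(u)], -1)  # homogeneous pixel coords, (H, W, 3)
    rays = pix @ np.linalg.inv(K).T              # K^{-1} [u, v, 1]^T applied per pixel
    return rays * depth[..., None]               # scale each ray by its depth
```

The resulting pointmap supplies the $(x, y, z)$ coordinates that the 3D k-NN attention layers use for neighbour search and relative position biases.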

This design considerably reduces the 2D/3D domain gap, allowing for effective cross-domain knowledge transfer and empirically improving 3D grounding performance by 2–3% with no drop in 2D performance (Jain et al., 13 Mar 2025).

5. Unified Co-Training Across Modalities and Tasks

  • Multi-Task Batching: Every training minibatch samples across 2D and 3D datasets, blending referential grounding, segmentation, and VQA examples.
  • Unified Decoder/Objective: The mask-decoder architecture and loss are shared across all tasks, with minor exceptions (e.g., omitting box prediction on lifted 2D data, activating VQA losses only on applicable data).
  • No Hand-Tuned Task Weights: Task balance emerges from the shared heads and decoder rather than per-task tuning; the only manual weighting is a small set of $\lambda$ loss coefficients.
  • Empirical Results: This co-training enhances 3D understanding without degrading 2D capabilities (Jain et al., 13 Mar 2025).

6. Architectural Innovations and Realistic Evaluation Protocol

UniVLG departs from prior vision-language grounding pipelines by introducing several key innovations:

  • No Mesh Reconstruction: Operates directly on raw sensor depth or monocularly estimated pointclouds, bypassing slow mesh post-processing.
  • No External Proposal Generation: Eschews two-stage box-based methods in favor of direct single-stage mask prediction.
  • Embodied-Aligned and Robust Evaluation: All benchmarks are evaluated on sensor-derived pointclouds (not preprocessed/synthetic meshes), with GT boxes omitted at test time. Baselines are re-trained on the same sensor-derived inputs for unbiased comparison.
  • Inference Efficiency: 90-frame inference is completed in ~1.05 s with 15 GB VRAM on A100 hardware.
  • Robustness: Combination of relative 3D k-NN attention and strong 2D features confers resilience to pose and depth noise (Jain et al., 13 Mar 2025).
| Innovation | Prior Methods | UniVLG Approach |
|---|---|---|
| Mesh dependency | Mesh reconstruction | Raw pointcloud |
| Object proposal | Two-stage (box-based) | Single-stage mask decoder |
| Evaluation input | Synthetic mesh samples | Sensor pointcloud |
| Training regime | Modality-specific | Unified multi-modal |

7. Significance and Future Prospects

By unifying 2D and 3D vision-language grounding, UniVLG demonstrates that strong pre-trained 2D representations, when systematically extended via lightweight 3D self-attention and an adaptive language-conditioned mask decoder, yield state-of-the-art results across a broad spectrum of 2D and 3D tasks. This architecture obviates the need for mesh processing and proposal generation, directly aligning model inputs and evaluation protocols with real-world, embodied perception settings. The approach opens pathways for further generalization, including more diverse multi-modal co-training, integration with additional sensory modalities, and adaptation to downstream embodied-agent tasks in the presence of limited 3D annotations (Jain et al., 13 Mar 2025).

For additional context on unified multi-view frameworks for long video generation, see the related "UniMLVG" architecture (Chen et al., 2024), though this is distinct from the 2D/3D vision-language grounding paradigm described above.
