Sparse 4D Indoor Semantic Instance Segmentation

Updated 23 January 2026
  • Temporally sparse 4D SIS is the task of segmenting and tracking indoor object instances across time, tackling scene evolution under sparse scan intervals.
  • It employs object-centric models and transformer-based architectures to maintain consistent instance identities despite long temporal gaps and dynamic changes.
  • Quantitative evaluations show significant improvements in temporal mAP, highlighting its potential for robust robotics, AR/VR, and persistent mapping applications.

Temporally sparse 4D indoor semantic instance segmentation (SIS) refers to the problem of jointly segmenting, identifying, and temporally associating object instances in indoor environments based on a sequence of 3D scans acquired at widely separated, non-dense temporal intervals. Unlike classical 3D semantic instance segmentation (3DSIS), which focuses on one scan at a time, or 4D LiDAR segmentation, which uses high-frequency sequential observations, temporally sparse 4D SIS must handle long unobserved periods during which substantial scene changes (object movement, occlusion, addition, removal, or deformation) can occur. State-of-the-art approaches construct object-centric models or adapt transformer-based architectures to propagate and maintain consistent instance identities across arbitrary time steps, establishing new benchmarks for the persistent mapping and tracking of evolving indoor scenes (Halber et al., 2019, Steiner et al., 16 Jan 2026).

1. Formal Problem Definition and Task Setting

Temporally sparse 4D SIS takes as input a sequence of $T$ temporally distinct, globally aligned 3D scans $\mathcal{P} = \{P^{(1)}, \dots, P^{(T)}\}$ of a single indoor environment, where each $P^{(t)}$ can contain a variable number of points $N_t$, and objects may have moved, been added, or removed. The core objective is to segment all points across all scans into consistent, temporally coherent semantic instance masks $\mathcal{M} = \{m^1, \dots, m^K\}$, each with a semantic label $c$, such that each $m^k \in \{0,1\}^{N}$ (where $N = \sum_t N_t$) covers an object with a unique identity throughout its visible lifetime, despite sparse temporal sampling and unobserved scene evolution.
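To make the task representation concrete, the following is a minimal sketch (all names and sizes are invented for illustration) of the input/output structures defined above: $T$ scans with variable point counts, and binary instance masks over the concatenated $N = \sum_t N_t$ points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two globally aligned scans with different point counts (T = 2).
scans = [rng.normal(size=(120, 3)), rng.normal(size=(150, 3))]
N = sum(len(p) for p in scans)  # total points across all scans

# One entry per object instance: a binary mask m^k over all N points
# plus a semantic label c; the key acts as the persistent identity.
masks = {
    "chair_0": {"mask": np.zeros(N, dtype=bool), "label": "chair"},
}
# Mark the (hypothetical) points belonging to this chair in each scan:
masks["chair_0"]["mask"][10:40] = True             # points in P^(1)
masks["chair_0"]["mask"][120 + 5:120 + 30] = True  # points in P^(2)

assert masks["chair_0"]["mask"].shape == (N,)
```

The same mask vector spans all scans, so a single instance identity persists across the temporal gap by construction.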

Temporal sparsity is central: scans are captured at intervals ranging from days to months, precluding use of continuous motion models, dense flow, or interpolation schemes standard in LiDAR-based 4D SIS. Scenes evolve with large, sometimes unobserved, changes; objects may appear, disappear, move, or deform from one scan to the next (Steiner et al., 16 Jan 2026).

2. Scene Modeling Strategies

Two principal modeling paradigms have demonstrated efficacy in temporally sparse 4D SIS:

a) Inductive Instance-Centric Object Models

The Rescan approach formalizes the temporal scene state at scan $t_i$ as a model $M_i = \{\mathcal{O}, \mathcal{A}\}$, where:

  • $\mathcal{O} = \{o_0, \dots, o_n\}$ is the set of object instances, each $o_k = \{u_k, G_k, c_k\}$ holding a unique ID $u_k$, a 3D mesh or point cloud $G_k$, and a semantic label $c_k$,
  • $\mathcal{A} = \{A_0, \dots, A_i\}$ logs the arrangements (poses $a_i^j = \{u_j, \mathbf{T}_j, s_j\}$) of all objects in $\mathcal{O}$ over time,
  • $\mathbf{T}_j$ is a rigid 6-DOF transform placing an object model into the current scan geometry.

By maintaining an explicit, additive library of object shapes and trajectories, the system enables both retrospective and prospective tracking of object identities across arbitrary temporal intervals (Halber et al., 2019).
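A minimal sketch of this object-centric scene state (class and field names are hypothetical, chosen to mirror the notation above):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectInstance:          # o_k = {u_k, G_k, c_k}
    uid: int                   # unique identity u_k
    geometry: np.ndarray       # point cloud G_k, shape (n, 3)
    label: str                 # semantic label c_k

@dataclass
class Pose:                    # a_i^j = {u_j, T_j, s_j}
    uid: int                   # which object this pose places
    transform: np.ndarray      # rigid transform T_j as a 4x4 homogeneous matrix
    present: bool              # presence flag s_j

@dataclass
class SceneModel:              # M_i = {O, A}
    objects: dict = field(default_factory=dict)       # uid -> ObjectInstance
    arrangements: list = field(default_factory=list)  # one list[Pose] per scan

    def place(self, uid):
        """Apply the latest recorded pose of object `uid` to its geometry."""
        pose = next(p for p in self.arrangements[-1] if p.uid == uid)
        pts = self.objects[uid].geometry
        hom = np.c_[pts, np.ones(len(pts))]   # to homogeneous coordinates
        return (hom @ pose.transform.T)[:, :3]
```

Because the object library is additive, an object observed in scan 1 but absent in scan 2 keeps its entry in `objects`, and only its pose history in `arrangements` records the change.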

b) End-to-End Spatio-Temporal Query Transformers

The ReScene4D approach adapts transformer-based query architectures from 3DSIS to the temporally sparse 4D task. It concatenates all $T$ scans into a unified 4D voxel grid $\mathcal{V} \in \mathbb{Z}^{K_0 \times 4}$ at fixed resolution, encoding $(x, y, z, t)$, and extracts multi-scale features from each scan using sparse convolutional or pre-trained encoders (e.g., Minkowski Sparse-UNet, PTv3). A set of $N_q$ learned queries is shared across all scans, and the decoder uses masked cross-attention and self-attention to predict a set of temporally consistent instance masks and their semantics (Steiner et al., 16 Jan 2026).
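The 4D voxelization step can be sketched as follows (voxel size and function name are invented for illustration; the paper's actual quantization pipeline may differ): each scan's points are quantized at a fixed spatial resolution and tagged with their scan index $t$, yielding integer $(x, y, z, t)$ coordinates.

```python
import numpy as np

def voxelize_4d(scans, voxel_size=0.05):
    """Concatenate T scans into one sparse 4D grid of (x, y, z, t) voxels."""
    coords = []
    for t, pts in enumerate(scans):
        xyz = np.floor(pts / voxel_size).astype(np.int64)  # spatial quantization
        txyz = np.c_[xyz, np.full(len(xyz), t)]            # append time index
        coords.append(txyz)
    return np.unique(np.vstack(coords), axis=0)            # dedupe -> (K0, 4)

rng = np.random.default_rng(0)
scans = [rng.random((100, 3)), rng.random((80, 3))]
V = voxelize_4d(scans)
assert V.shape[1] == 4
```

Treating $t$ as a fourth integer coordinate lets a sparse backbone process all scans jointly while the decoder attends across time.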

3. Algorithmic Pipelines

The main algorithmic pipelines are summarized in the following table:

| Approach | Core Stages | Temporal Coherence Mechanisms |
| --- | --- | --- |
| Rescan | Data preprocessing → pose proposal (multi-resolution ICP grid search) → arrangement optimization (global objective) → segmentation transfer (nearest-neighbor & graph-cut) → geometry fusion (Poisson reconstruction) | Explicit object models & arrangement history; hysteresis term in objective |
| ReScene4D | 4D voxelization → feature extraction (frozen or trained backbone) → mask transformer decoder (cross-time masked attention) → joint mask/semantic prediction | Shared queries; cross-time contrastive loss; spatio-temporal masking; decoder serialization |

In Rescan, the arrangement optimization step maximizes a global objective:

$$A_i^* = \arg\max_{A_i'} O(S_i, A_i', \mathcal{A}) = w_c O_c + w_g O_g + w_i O_i + w_h O_h$$

with terms for coverage, geometry fit (ICP), intersection penalty, and hysteresis (temporal smoothness), tuned by grid search. Optimization proceeds via greedy warm-start and simulated annealing. Labels are transferred by nearest-neighbor assignment followed by multi-label graph-cut smoothing (Halber et al., 2019).
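As a toy illustration of scoring a candidate arrangement with this weighted-sum objective, the term values and weights below are invented; in Rescan the terms come from actual coverage, ICP-fit, intersection, and hysteresis computations, and the weights from grid search:

```python
def objective(terms, weights):
    """Weighted sum O = w_c*O_c + w_g*O_g + w_i*O_i + w_h*O_h.
    The intersection term is a penalty, so it enters with a negative value."""
    return sum(weights[k] * terms[k] for k in weights)

# Invented per-term scores for one candidate arrangement:
terms = {"coverage": 0.8, "geometry": 0.7, "intersection": -0.1, "hysteresis": 0.9}
weights = {"coverage": 1.0, "geometry": 0.5, "intersection": 2.0, "hysteresis": 0.3}

score = objective(terms, weights)
# Candidate arrangements (from greedy warm-start / simulated annealing moves)
# are compared by this score; the argmax is kept.
```

The hysteresis term is what ties consecutive scans together: it rewards arrangements that keep objects near their previously observed poses.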

ReScene4D processes all temporal features with a mask-transformer decoder, equipped with cross-time strategies to promote consistent instance clustering and identity:

  • Cross-time contrastive loss aligns superpoint embeddings across scans,
  • Spatio-temporal (ST) masking pools mask predictions to allow queries to attend to temporally related regions,
  • Decoder serialization patterns encourage queries to learn both spatial and temporal context mixing (Steiner et al., 16 Jan 2026).
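The cross-time contrastive idea can be illustrated with a generic InfoNCE-style loss (a stand-in, not ReScene4D's exact formulation): embeddings of the same instance in different scans are pulled together, while other instances are pushed apart.

```python
import numpy as np

def cross_time_contrastive(emb_t1, emb_t2, temperature=0.1):
    """emb_t1, emb_t2: (K, D) arrays; row k is instance k's embedding in
    scan 1 / scan 2, with rows aligned by ground-truth identity."""
    a = emb_t1 / np.linalg.norm(emb_t1, axis=1, keepdims=True)
    b = emb_t2 / np.linalg.norm(emb_t2, axis=1, keepdims=True)
    logits = a @ b.T / temperature                   # (K, K) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # diagonal = matching pairs
```

When cross-scan embeddings of the same instance are near-identical and distinct instances are dissimilar, the loss approaches zero, which is exactly the condition under which shared queries can cluster an instance consistently across the temporal gap.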

4. Temporal Association and Metrics

A key challenge in 4D SIS is robustly evaluating temporal association and instance tracking. The temporal mAP (t-mAP) metric, introduced by ReScene4D, extends standard mean average precision to explicitly reward temporal identity consistency. For each predicted and ground-truth trajectory pair, it computes the per-time-step IoU over the union of their lifetimes and takes the minimum:

$$\mathrm{t\text{-}IoU}(pr_i, gt_i) = \min_{t:\ (pr_i,\, gt_i)\ \text{exist}} \mathrm{IoU}\big(pr_i(c, t),\, gt_i(c, t)\big)$$

Ambiguities (e.g., identical objects switching locations) are resolved through a greedy assignment scheme maximizing the sum of weighted per-time overlaps, accommodating missing or spurious predictions. The result is a metric sensitive to mistaken identity switches or fragmented trajectories, unlike per-frame mAP or instance IoU (Steiner et al., 16 Jan 2026).
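A minimal sketch of this temporal IoU (function names are invented; treating a time step where one trajectory is missing as zero overlap is one plausible reading of the penalty for missing or spurious predictions):

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two binary masks over the same point set."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def t_iou(pred_traj, gt_traj):
    """pred_traj / gt_traj: dicts mapping time step -> binary mask array.
    Returns the minimum per-step IoU over the union of both lifetimes."""
    ious = []
    for t in set(pred_traj) | set(gt_traj):
        p, g = pred_traj.get(t), gt_traj.get(t)
        # Missing at this step (object predicted but absent in GT, or
        # vice versa) counts as zero overlap.
        ious.append(mask_iou(p, g) if p is not None and g is not None else 0.0)
    return min(ious) if ious else 0.0
```

Taking the minimum over time is what makes the metric sensitive to identity switches: a trajectory that is perfect at $t=1$ but attached to the wrong object at $t=2$ scores low overall.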

5. Experimental Evaluation and Results

Experiments are conducted on the 3RScan dataset (478 scenes, length-2 scan sequences, i.e., $T=2$), with manually annotated, temporally associated ground-truth instance labels. Baselines include per-frame SIS (e.g., Mask3D), heuristic temporal matching, and prior 4D SIS attempts (Mask4D, Mask4Former).

Quantitative results demonstrate substantial performance gains using temporally aware models:

| Method | t-mAP | mAP |
| --- | --- | --- |
| Mask4D | 1.3 | 2.1 |
| Mask3D + semantic matching | 20.1 | 25.9 |
| Mask3D + geometric matching | 20.7 | 29.7 |
| ReScene4D (Concerto) | 34.8 | 43.3 |

In the Rescan benchmark (different dataset), instance transfer IoU rises to 0.65, far exceeding fine-tuned 3DSIS (0.345), with per-point semantic IoU of 0.859 and instance IoU of 0.837 (Halber et al., 2019, Steiner et al., 16 Jan 2026).

Ablation studies confirm the necessity of both spatial and temporal context:

  • Coverage is indispensable in Rescan; removing it collapses all metrics.
  • Hysteresis (temporal smoothness) is critical for instance transfer in Rescan.
  • In ReScene4D, contrastive loss and spatio-temporal serialization each improve t-mAP by up to 6 points; their combination is additive.

6. Limitations and Open Directions

The primary limitations include:

  • Dataset: 3RScan has limited semantic diversity and few dynamically changing objects (only 17% involve temporal changes).
  • Temporal window: memory and architectural constraints limit sequence length to $T=2$ in ReScene4D, precluding evaluation of longer-horizon persistence and occlusion effects.
  • Annotation: inconsistencies in ground truth hinder precise assessment of noise-robustness.
  • Architectural scope: state-of-the-art methods do not yet integrate advanced query refinement architectures (e.g., QueryFormer) or tune backbone–decoder connectivity for optimal temporal modeling (Steiner et al., 16 Jan 2026).

Future research aims to:

  • Curate larger, richly dynamic 4D indoor datasets with higher proportions of changing instances,
  • Extend the end-to-end temporal window to $T > 2$,
  • Combine object-centric and transformer architectures,
  • Integrate learned motion priors or explicit temporal dynamics.

7. Significance and Applications

Temporally sparse 4D SIS addresses a central challenge in long-term deployment of depth-sensing applications such as robotics, AR/VR, and persistent mapping: robustly mapping and tracking semantic entities over extended periods of dynamic evolution without requiring frequent scanning. By jointly leveraging multi-temporal context, spatial structure, and explicit object models, modern approaches surpass both classical per-frame SIS and dense 4DSIS methods reliant on artificial temporal continuity. The results establish a foundation for persistent semantic mapping and robust instance-level tracking in real-world indoor environments subject to unconstrained change (Halber et al., 2019, Steiner et al., 16 Jan 2026).
