Explicit Per-Splat Texture Mapping

Updated 27 January 2026
  • Explicit per-splat texture mapping is a technique that maps spatially varying 2D textures to individual Gaussian splats, decoupling geometric and appearance complexity.
  • It employs methods like per-splat texture attachments, global UV atlas mapping, and adaptive sampling to optimize rendering speed and editability.
  • Key results include enhanced detail reproduction, improved memory efficiency, and real-time processing for applications in dense scene modeling and photorealistic reconstruction.

Explicit per-splat texture mapping refers to the attachment and mapping of a spatially varying appearance function—typically represented as a 2D texture or low-dimensional field—to each Gaussian primitive ("splat") in a Gaussian Splatting or related radiance field representation. This approach allows for fine-grained, local appearance control and editability, decoupling geometric and appearance complexity at the primitive level. Contemporary methods deploy this mechanism in dense scene modeling, view synthesis, shape editing, and photorealistic reconstruction using both planar and volumetric splats (Xu et al., 2024, Younes et al., 16 Jun 2025, Papantonakis et al., 2 Dec 2025, Wei et al., 16 Dec 2025, Xie et al., 28 Nov 2025, Baert et al., 9 Dec 2025, Zhou et al., 11 Dec 2025, Lim et al., 2024).

1. Principles and Variants of Per-Splat Texture Mapping

Explicit per-splat texture mapping situates a compact, explicit appearance field in the local canonical space of each splat (e.g., a 2D texture patch, neural field, or parametric map). During rendering, texture values are sampled based on the ray–splat intersection or projection onto a mesh/UV atlas.

Major strategies vary in:

  • Texture parameterization: per-splat patches, a global mesh UV atlas, or shared neural fields (e.g., tri-planes).
  • Adaptivity and capacity allocation: fixed grids versus content-driven resizing, warping, or anisotropic growth.
  • Integration with surface structure: free-floating splats versus mesh-anchored primitives with analytic UV mappings.

These distinctions dictate expressiveness, editability, computational efficiency, and suitability for different downstream applications.

2. Mathematical and Algorithmic Formulation

Per-splat texture mapping is formalized by specifying, for each splat $i$, a geometric anchor (often a mean $\mu_i$ and oriented local frame), a covariance structure $\Sigma_i$, and a splat-local appearance field (e.g., $T_i: [0,1]^2 \to \mathbb{R}^3$ or $F_i: \mathbb{R}^2 \to \mathbb{R}^3$).

For a camera ray intersecting splat $i$ at a local canonical coordinate $(u, v)$, the splat's contribution is typically

$$c_i(\mathbf{x}, \mathbf{d}) = \mathrm{SH}_i(\mathbf{d}) + \mathrm{TextureSample}(T_i, (u, v)),$$

with opacity-weighted blending across splats via Gaussian spatial weights

$$w_i(\mathbf{x}) = \alpha_i \exp\!\left(-\frac{1}{2}\, \mathbf{p}_{xy}^{\top} \Sigma_i^{-1} \mathbf{p}_{xy}\right),$$

followed by front-to-back alpha compositing (Papantonakis et al., 2 Dec 2025, Xu et al., 2024, Younes et al., 16 Jun 2025, Wei et al., 16 Dec 2025).
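As a concrete illustration of the equations above, here is a minimal NumPy sketch of textured-splat shading with front-to-back compositing along one ray. The dict layout and helper names are assumptions made for this example, not any cited implementation.

```python
import numpy as np

def sample_texture(T, u, v):
    """Bilinearly sample a per-splat RGB texture T of shape (H, W, 3) at (u, v) in [0, 1]^2."""
    H, W, _ = T.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * T[y0, x0] + fx * T[y0, x1]
    bot = (1 - fx) * T[y1, x0] + fx * T[y1, x1]
    return (1 - fy) * top + fy * bot

def composite_splats(splats, p_xy_list, uv_list):
    """Front-to-back alpha compositing of textured splats along one ray.

    Each splat is a dict with keys 'alpha', 'Sigma_inv' (2x2), 'sh' (base RGB,
    spherical harmonics already evaluated for the view direction), and
    'texture' (H, W, 3).  p_xy_list holds 2D offsets from each splat center in
    its tangent plane; uv_list holds the canonical texture coordinates.
    Splats are assumed pre-sorted front to back.
    """
    color = np.zeros(3)
    transmittance = 1.0
    for s, p_xy, (u, v) in zip(splats, p_xy_list, uv_list):
        # Gaussian spatial weight: w_i = alpha_i * exp(-0.5 p^T Sigma^{-1} p)
        w = s['alpha'] * np.exp(-0.5 * p_xy @ s['Sigma_inv'] @ p_xy)
        # Per-splat color: view-dependent SH base plus sampled texture detail
        c = s['sh'] + sample_texture(s['texture'], u, v)
        color += transmittance * w * c
        transmittance *= (1.0 - w)
        if transmittance < 1e-4:  # early ray termination
            break
    return color
```

Real renderers perform this per tile in CUDA with hardware texture filtering; the loop above only mirrors the math term by term.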

Key innovations include:

  • Taylor expansions for fast evaluation of learned UV mappings near Gaussian centers (Xu et al., 2024).
  • Analytic linear mapping from splat-local coordinates to mesh UV space using barycentric and Jacobian computations (Baert et al., 9 Dec 2025).
  • Learnable frequency-aware warps ($\phi$, with the Jacobian determinant controlling local texel density) to align texture sampling with appearance complexity (Xie et al., 28 Nov 2025).
  • Neural texture fields predicted by shared global features for memory efficiency and generalization (Wang et al., 24 Nov 2025).
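The Taylor-expansion idea can be sketched as a first-order approximation of a learned UV mapping around each Gaussian center, with the center UV and its Jacobian precomputed once per splat; the function name and argument layout are illustrative, not the cited paper's API.

```python
import numpy as np

def taylor_uv(x, mu, uv_mu, J):
    """First-order Taylor approximation of a learned UV mapping near a
    Gaussian center:  uv(x) ~= uv(mu) + J @ (x - mu).

    x     : query point in world space, shape (3,)
    mu    : splat center, shape (3,)
    uv_mu : UV coordinates of the center, precomputed once per splat, shape (2,)
    J     : 2x3 Jacobian of the UV mapping at mu, also precomputed
    """
    return uv_mu + J @ (x - mu)
```

This replaces a per-sample MLP evaluation with a cheap affine map, which is why it enables real-time rendering of a globally learned atlas.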

3. Adaptive and Content-Aware Texturing

One core challenge is the allocation of texture capacity in spatially non-uniform scenes. Solutions include:

  • Adaptive Texel Sizing: Enforcing a minimum world-space texel size per splat to avoid aliasing, adaptively coarsening or refining textures based on photometric and low-pass reconstruction errors (Papantonakis et al., 2 Dec 2025).
  • Anisotropic and Content-Driven Growth: Growing texture resolution along axes with high error gradients to avoid over-parameterization in smooth regions (Wei et al., 16 Dec 2025).
  • Frequency-Aligned Texture Remapping: Employing spatial deformation fields to warp the sampling grid according to local color frequency, as measured by image gradients, ensuring high-detail regions receive denser sampling under a fixed parameter budget (Xie et al., 28 Nov 2025).
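The frequency-aligned remapping can be illustrated in one dimension: warp the sampling grid by the inverse CDF of image-gradient magnitude so that texel density follows local frequency. This is a simplified sketch of the idea, not the cited method's implementation; `frequency_aligned_warp` and its arguments are hypothetical.

```python
import numpy as np

def frequency_aligned_warp(grad_mag, eps=1e-3):
    """Place a fixed budget of 1D sample positions in [0, 1] with density
    proportional to local gradient magnitude, via inverse-CDF sampling.

    grad_mag : per-cell image-gradient magnitudes along one texture axis
    eps      : floor so flat regions still receive some samples
    """
    density = np.asarray(grad_mag, dtype=float) + eps
    cdf = np.cumsum(density)
    cdf = cdf / cdf[-1]                       # normalized CDF (the warp phi)
    n = len(density)
    uniform = (np.arange(n) + 0.5) / n
    # Invert the CDF: uniform samples cluster where the density is high
    return np.interp(uniform, np.concatenate([[0.0], cdf]),
                     np.linspace(0.0, 1.0, n + 1))
```

Pushing a uniform texel grid through this inverse warp concentrates capacity on high-frequency regions while keeping the total parameter count fixed.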

These mechanisms decouple geometric complexity (splat density) from appearance complexity (texture resolution), yielding models capable of reproducing sharp edges and dense patterns (e.g., printed text, foliage) using fewer splats and less overall storage.
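Adaptive texel sizing can be illustrated by a small helper that derives a per-axis texture resolution from a splat's world-space tangent extents and a minimum texel size; the function and its parameters are assumptions for this sketch, not the cited paper's exact rule.

```python
import numpy as np

def texture_resolution(scale_xy, min_texel_size, max_res=64):
    """Pick per-axis texture resolutions so that no texel is smaller than a
    minimum world-space edge length (an anti-aliasing floor), clamped to max_res.

    scale_xy       : world-space extents of the splat along its two tangent axes
    min_texel_size : smallest allowed world-space texel edge length
    """
    res = np.ceil(np.asarray(scale_xy) / min_texel_size).astype(int)
    return np.clip(res, 1, max_res)
```

In a full system the resolution would also be refined or coarsened during training from photometric and low-pass reconstruction errors; this helper only captures the sizing floor.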

4. Integration with Structured Surfaces and Global Atlases

Some approaches exploit existing mesh surface structure to globally parameterize per-splat textures:

  • UV Atlas Mapping: GTAvatar constructs an analytic mapping from each splat's local domain $(s,t)$ to a global mesh UV atlas, ensuring edits and relighting via universal 2D maps (Baert et al., 9 Dec 2025).
  • Mesh-Anchored Splatting: DeMapGS attaches splats to mesh faces via barycentric coordinates, enabling joint optimization of geometry, texture, and surface displacement; explicit UV maps can then be extracted by compositing splat contributions per-texel in the triangle-local parameterization (Zhou et al., 11 Dec 2025).
  • Tri-Plane or Global Neural Fields: Neural Texture Splatting predicts local texture fields from shared global tri-planes and small neural decoders, mixing per-splat fidelity with significant memory savings and efficient global feature sharing (Wang et al., 24 Nov 2025).

This structured mapping ensures continuity, eliminates seams, and allows for standard 2D texture editing or mesh-based effects unavailable in per-Gaussian-only schemes.
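The analytic splat-to-atlas mapping can be sketched as follows: barycentric interpolation sends a point on a mesh-anchored splat to atlas UVs, and the per-triangle Jacobian gives the constant linear map from splat-local offsets to UV offsets. The function names and the least-squares construction are illustrative.

```python
import numpy as np

def splat_point_to_atlas_uv(bary, tri_uvs):
    """Map a point on a mesh-anchored splat, given in barycentric coordinates
    of its host triangle, to the global UV atlas.

    bary    : (3,) barycentric coordinates (summing to 1)
    tri_uvs : (3, 2) atlas UV coordinates of the triangle's vertices
    """
    return np.asarray(bary) @ np.asarray(tri_uvs)

def splat_local_jacobian(tri_pos, tri_uvs):
    """Constant (2, 3) Jacobian d(uv)/d(xyz) over a triangle, so splat-local
    world offsets translate linearly to UV offsets.  Solved from the two
    triangle edges via a pseudoinverse.
    """
    E = np.asarray(tri_pos[1:]) - np.asarray(tri_pos[0])     # (2, 3) edge vectors
    dUV = np.asarray(tri_uvs[1:]) - np.asarray(tri_uvs[0])   # (2, 2) UV deltas
    # Solve dUV = J @ E^T for J  ->  J = dUV @ pinv(E^T)
    return dUV @ np.linalg.pinv(E.T)
```

Because the map is linear per triangle, splat textures can be composited into a seam-free global atlas texel by texel.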

5. Optimization, Pipeline Architectures, and Losses

Optimization typically proceeds jointly or in stages over geometry and texture parameters, driven by photometric losses on rendered views.

Models are rendered via forward rasterization or ray-marching, often with CUDA-level path optimizations and spatial acceleration data structures for candidate splat identification (Lim et al., 2024, Papantonakis et al., 2 Dec 2025, Wei et al., 16 Dec 2025).
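Candidate splat identification with a spatial acceleration structure can be sketched with a uniform hash grid. A production renderer would insert each splat into every cell its footprint overlaps and use GPU-friendly layouts; hashing only the centers keeps this illustration minimal.

```python
import numpy as np
from collections import defaultdict

def build_splat_grid(centers, cell_size):
    """Hash splat centers into a uniform 3D grid for fast candidate lookup."""
    grid = defaultdict(list)
    for i, c in enumerate(centers):
        key = tuple(np.floor(np.asarray(c) / cell_size).astype(int))
        grid[key].append(i)
    return grid

def candidates_near(grid, point, cell_size):
    """Return splat indices in the 27 cells surrounding a query point."""
    base = np.floor(np.asarray(point) / cell_size).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(grid.get(tuple(base + np.array([dx, dy, dz])), []))
    return out
```

Restricting per-ray work to nearby candidates is what keeps textured-splat rendering real-time as splat counts grow.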

6. Performance, Expressiveness, and Limitations

Extensive empirical benchmarks demonstrate the benefits of explicit per-splat texture mapping over alternatives in detail reproduction, memory efficiency, and rendering speed.

Limitations include residual aliasing for out-of-training-view directions, restricted handling of highly view-dependent or specular phenomena unless using high-capacity fields, and, for some methods, the need for per-object hyperparameter tuning or complex gradient allocation schemes (Papantonakis et al., 2 Dec 2025, Wei et al., 16 Dec 2025).

7. Representative Methods and Comparative Summary

The following table summarizes representative explicit per-splat texture mapping strategies derived from recent literature:

| Method / Paper | Texture Parameterization | Adaptivity / Allocation | Key Application |
| --- | --- | --- | --- |
| Texture-GS (Xu et al., 2024) | Global UV atlas, learned MLP | Taylor expansion for efficient mapping | Real-time editing |
| TextureSplat (Younes et al., 16 Jun 2025) | Per-splat small textures | Unified atlas, hardware filtering | Reflective scenes, PBR |
| Content-Aware Texturing (Papantonakis et al., 2 Dec 2025) | Per-splat, variable size | Adaptive up/downscale, splitting | Parameter efficiency |
| ASAP-Textured Gaussians (Wei et al., 16 Dec 2025) | Per-splat, anisotropic tiles | Gaussian-CDF warping, error-driven growth | Memory-quality trade-off |
| FACT-GS (Xie et al., 28 Nov 2025) | Per-splat, fixed grid + warp | Frequency-aligned allocation (deformation-field Jacobian) | High-frequency detail |
| Neural Texture Splatting (Wang et al., 24 Nov 2025) | Neural tri-planes per splat | Shared global field, CP decomposition | Expressive effects, 4D |
| GTAvatar (Baert et al., 9 Dec 2025) | Analytic map to mesh UV atlas | UV-regularized for continuity/editing | Editable, relightable avatars |
| DeMapGS (Zhou et al., 11 Dec 2025) | Mesh-anchored, per-face | Attribute and geometry optimization | Mesh extraction, editing |
| Explicit Per-Splat Projection (Lim et al., 2024) | Splats→mesh, Gaussian-weighted | Uniform spatial grid, analytic blending | Fast avatar texture transfer |

These approaches collectively demonstrate that explicit per-splat texture mapping, when combined with modern acceleration and allocation strategies, provides a highly flexible, efficient, and editable framework for high-quality radiance field and surface reconstruction across static, dynamic, and relightable scenarios.
