Assigning per-frame photometric parameters for novel views

Determine a principled method for assigning per-frame photometric compensation parameters to novel viewpoints in multi-view 3D reconstruction pipelines in which such parameters are optimized independently per frame (for example, per-image GLO embeddings in NeRF-W, per-image affine color transforms in URF, or per-pixel bilateral-grid parameters in BilaRF), without relying on access to ground-truth novel-view images.

Background

Many recent methods mitigate multi-view photometric inconsistencies by optimizing additional per-frame parameters, such as latent appearance embeddings (GLO), affine color transforms, or bilateral grids. While these improve training-view fidelity, the parameters are learned independently for each input image and therefore do not directly generalize to unobserved viewpoints. This creates a practical ambiguity at inference time: how should these appearance parameters be set for a novel view, for which no ground-truth target image exists?
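To make the ambiguity concrete, here is a minimal sketch of the per-image affine-color-transform style of compensation (in the spirit of URF; the function name, identity initialization, and parameter layout are illustrative assumptions, not taken from any of the cited papers):

```python
import numpy as np

def apply_affine_color_transform(rgb, M, b):
    """Apply a per-image affine color transform to rendered colors.

    rgb: (H, W, 3) rendered colors in [0, 1]
    M:   (3, 3) per-image color matrix (illustrative; one per training frame)
    b:   (3,)   per-image color offset
    """
    return rgb @ M.T + b

# One (M, b) pair is optimized per training image, jointly with the scene
# model (identity initialization shown here for illustration).
params = {i: (np.eye(3), np.zeros(3)) for i in range(5)}

# For a training view, the frame index selects its parameters:
img = np.random.rand(4, 4, 3)
M, b = params[0]
out = apply_affine_color_transform(img, M, b)

# For a novel view there is no frame index, hence no obvious (M, b) to
# apply -- exactly the ambiguity this note describes.
```

The same indexing problem arises for GLO embeddings and bilateral-grid parameters: each is keyed by training-frame identity, which a novel viewpoint does not have.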

The paper highlights that common evaluation protocols sidestep this ambiguity by aligning rendered outputs to the ground-truth novel views (e.g., via affine color correction), which is unrealistic in practical deployment, where those target pixels are unavailable. The open issue is therefore a robust, physically grounded approach to predicting per-frame appearance parameters for unseen viewpoints that avoids any reliance on target pixels.
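The alignment step that such protocols rely on can be sketched as a closed-form least-squares affine color fit between a rendered image and its ground truth. This is a generic formulation (the function name and exact parameterization are assumptions, not the specific protocol of any cited paper), shown only to illustrate why it requires the very target pixels that are absent at deployment:

```python
import numpy as np

def fit_affine_color_correction(pred, gt):
    """Fit A (3x3) and b (3,) minimizing ||pred @ A.T + b - gt||^2.

    pred, gt: (N, 3) arrays of RGB values (flattened rendered / ground-truth
    images). Illustrative sketch of the evaluation-time alignment step.
    """
    X = np.hstack([pred, np.ones((pred.shape[0], 1))])  # (N, 4) homogeneous
    # Solve X @ W ~= gt for W (4, 3): rows 0-2 give A.T, row 3 gives b.
    W, *_ = np.linalg.lstsq(X, gt, rcond=None)
    return W[:3].T, W[3]

# Correcting a rendered novel view this way consumes gt -- the ground-truth
# pixels that a deployed system does not have, which is the paper's objection.
```

Because the fit consumes `gt`, any metric computed after this correction overstates the photometric fidelity achievable without ground truth.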

References

Parameters for novel views: since the parameters are optimized independently per frame, it is unclear how to assign appropriate values when synthesizing novel views.

PPISP: Physically-Plausible Compensation and Control of Photometric Variations in Radiance Field Reconstruction (2601.18336 - Deutsch et al., 26 Jan 2026), Section 1 (Introduction), itemized challenges under mitigation strategies