
Multi-view Surface Reconstruction Using Normal and Reflectance Cues

Published 4 Jun 2025 in cs.CV (arXiv:2506.04115v1)

Abstract: Achieving high-fidelity 3D surface reconstruction while preserving fine details remains challenging, especially in the presence of materials with complex reflectance properties and without a dense-view setup. In this paper, we introduce a versatile framework that incorporates multi-view normal and optionally reflectance maps into radiance-based surface reconstruction. Our approach employs a pixel-wise joint re-parametrization of reflectance and surface normals, representing them as a vector of radiances under simulated, varying illumination. This formulation enables seamless incorporation into standard surface reconstruction pipelines, such as traditional multi-view stereo (MVS) frameworks or modern neural volume rendering (NVR) ones. Combined with the latter, our approach achieves state-of-the-art performance on multi-view photometric stereo (MVPS) benchmark datasets, including DiLiGenT-MV, LUCES-MV and Skoltech3D. In particular, our method excels in reconstructing fine-grained details and handling challenging visibility conditions. The present paper is an extended version of the earlier conference paper by Brument et al. (in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024), featuring an accelerated and more robust algorithm as well as a broader empirical evaluation. The code and data related to this article are available at https://github.com/RobinBruneau/RNb-NeuS2.

Summary

  • The paper introduces a two-stage method that integrates multi-view normals and reflectance cues via radiance re-parametrization for improved 3D surface reconstruction.
  • It unifies photometric stereo inputs with neural implicit surface methods, achieving faster performance and robust reconstructions even with significantly fewer views.
  • Empirical results on standard datasets show marked improvements in Chamfer distance and angular error, underscoring its practical potential in complex 3D scanning applications.

Overview

The paper "Multi-view Surface Reconstruction Using Normal and Reflectance Cues" presents a comprehensive framework for enhancing 3D surface reconstruction, particularly in scenarios involving complex materials and limited viewpoints. It introduces a method that integrates multi-view normal and reflectance maps with radiance-based surface reconstruction, leveraging their inherent geometric and photometric cues. The significance of this approach lies in its ability to improve state-of-the-art performance not only in capturing intricate details but also in addressing challenging visibility and reflectance conditions.

Methodology

This research proposes a novel two-stage approach that leverages input from photometric stereo (PS) methods. First, per-view normal and reflectance maps are estimated using state-of-the-art PS techniques such as SDM-UniPS and UniMS-PS. These inputs are then re-parametrized into radiance vectors using a physically based rendering model under simulated, varying illumination. This re-parametrization turns heterogeneous surface information into homogeneous radiance vectors, simplifying their integration into standard surface reconstruction pipelines such as multi-view stereo (MVS) or neural volume rendering (NVR).
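To make the re-parametrization concrete, here is a minimal sketch under a Lambertian shading assumption; the function name, light configuration, and reflectance model are illustrative choices, not the paper's exact implementation. Each pixel's estimated normal and albedo are mapped to the radiances the pixel would emit under a set of simulated directional lights:

```python
import numpy as np

def radiance_vector(normal, albedo, light_dirs):
    """Re-parametrize one pixel's normal and reflectance as a vector of
    radiances under simulated directional lights (Lambertian sketch).

    normal     : (3,) unit surface normal from a photometric-stereo method
    albedo     : (3,) RGB reflectance for this pixel
    light_dirs : (K, 3) unit directions of the simulated illumination
    """
    shading = np.clip(light_dirs @ normal, 0.0, None)   # (K,) clamped cosine terms
    return shading[:, None] * albedo[None, :]           # (K, 3) one radiance per light

# Example: three axis-aligned simulated lights
lights = np.eye(3)
r = radiance_vector(np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.6, 0.5]), lights)
print(r)  # only the light aligned with the normal contributes
```

Any shading model differentiable in the normal and reflectance could be substituted here; the key point is that the output is a plain vector of radiances, the same data type a radiance-based reconstruction pipeline already consumes.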

A key feature of this framework is the unification of surface normals and reflectance as radiance values, enabling a single-objective optimization for surface reconstruction. Here, the paper builds upon existing neural implicit surface methods, adapting the NeuS2 framework to incorporate the radiance-based optimization strategy. This yields a substantial speedup through CUDA-optimized kernels and a multiresolution hash grid that concentrates the volumetric rendering effort near the surface.
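A hypothetical PyTorch sketch of such a single radiance objective follows; the released NeuS2-based implementation is CUDA-accelerated and structured differently, so the names and shapes here are assumptions. Normals derived from the implicit surface (e.g., SDF gradients) and the predicted reflectance are rendered under the same simulated lights and compared against the target radiance vectors with one L2 loss:

```python
import torch

def radiance_loss(pred_normals, pred_albedo, target_radiance, light_dirs):
    """Single radiance objective (illustrative sketch, not the paper's exact code).

    pred_normals    : (P, 3) unit normals derived from the implicit surface
    pred_albedo     : (P, 3) reflectance predicted per pixel
    target_radiance : (P, K, 3) radiance vectors built from the PS inputs
    light_dirs      : (K, 3) the same simulated light directions
    """
    shading = torch.clamp(pred_normals @ light_dirs.T, min=0.0)   # (P, K)
    rendered = shading.unsqueeze(-1) * pred_albedo.unsqueeze(1)   # (P, K, 3)
    return torch.mean((rendered - target_radiance) ** 2)          # one L2 loss over all lights
```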

Strong Numerical Results

The empirical validation shows that the proposed method achieves substantial improvements across benchmarks. Experiments on datasets such as DiLiGenT-MV and LUCES-MV show marked gains in Chamfer distance and in the mean angular error of the recovered normals, particularly in fine-detail capture and sparse-view scenarios. For instance, the framework remains robust even when the number of views is reduced by 75%, markedly outperforming existing MVS and MVPS solutions such as SuperNormal and NeuS2.
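For reference, the two reported metrics can be computed as follows. This is a standard formulation, not code from the paper; conventions vary across benchmarks (e.g., whether the two directed Chamfer terms are summed or averaged):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pts_a, pts_b):
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # nearest-neighbor distances A -> B
    d_ba, _ = cKDTree(pts_a).query(pts_b)  # nearest-neighbor distances B -> A
    return d_ab.mean() + d_ba.mean()

def mean_angular_error_deg(n_est, n_gt):
    """Mean angular error in degrees between unit normal maps of shape (N, 3)."""
    cos = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```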

The paper also emphasizes the versatility of the method across different acquisition setups, as illustrated by experiments on Skoltech3D. Despite challenging lighting conditions that cause over-exposure, the proposed approach consistently outperforms the baselines, primarily through its robust handling of reflectance and geometric variations.

Implications and Future Outlook

This research has direct implications for fields requiring high-fidelity 3D reconstruction in complex environments, such as cultural heritage preservation, digital content creation, and medical imaging. The framework offers practical promise for affordable, accurate 3D scanning in semi-controlled environments, leveraging multi-light setups and advanced PS techniques to realize detailed, robust reconstructions.

The findings point to the practical advantages of MVPS methods over traditional MVS approaches. Especially in capturing fine geometric details and dealing with complex reflectance scenarios, MVPS has the potential to redefine methodologies for digital 3D acquisition. However, challenges remain, notably in ground-truth dataset quality, which currently limits the reliability of MVPS evaluations and could be improved through better sensors and calibration methodologies.

Future research could focus on improving robustness against noisy inputs and on the accuracy of pre-processing stages such as normal estimation. Scaling the framework to unstructured illumination setups or extending its physically based reflectance models may further widen its applicability. This intersection of photometric stereo advances and multi-view normal integration is poised to drive future developments in 3D surface reconstruction.
