Parametric score models and realizability/approximation under coarse optimization

Develop a fully parametric class of score networks for the small-noise regime that explicitly encodes projection-like geometry (e.g., via physics‑informed architectures) and prove realizability and approximation guarantees for such models under local denoising score matching with coarse optimization.
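As a concrete illustration, here is a minimal PyTorch sketch of one way such a parametric class could be written, assuming the small-noise score is well approximated by the projection form (proj_M(x) − x)/σ². The class name ProjectionScoreNet and all layer sizes are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class ProjectionScoreNet(nn.Module):
    """Score model s_theta(x, sigma) = (P_theta(x) - x) / sigma**2,
    where P_theta is a learned map intended to approximate projection
    onto the data manifold. At small sigma this mirrors the
    projection-like form of the true score, (proj_M(x) - x) / sigma**2."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
        # sigma has shape (batch, 1) and broadcasts over coordinates
        return (self.proj(x) - x) / sigma**2
```

Under this parametrization, realizability would amount to showing that some parameter setting of P_theta approximates the metric projection onto the data manifold on the relevant neighborhood.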

Background

The core analysis uses a largely nonparametric function-class specification to capture the eikonal/projection structure that dominates small-noise scores. The authors argue that making this inductive bias explicit in a parametric architecture, and proving guarantees under coarse optimization, would be an important next step.

This direction aims to better align theoretical assumptions with practical neural-network implementations while maintaining the geometric focus that underpins their results.
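For concreteness, here is a hedged sketch of local denoising score matching with coarse optimization, interpreting "local" as training at a single small noise level and "coarse optimization" as a fixed, small budget of SGD steps rather than training to convergence; both readings, and the function name coarse_local_dsm, are assumptions for illustration, not the paper's precise definitions.

```python
import torch

def coarse_local_dsm(model, data, sigma=0.05, n_steps=200, lr=1e-3):
    """Local denoising score matching at one small noise level, optimized
    coarsely (a fixed small SGD budget instead of training to convergence).
    Uses the DSM identity: for x = x0 + sigma * eps, the conditional score
    is -(x - x0) / sigma**2 = -eps / sigma, which is the regression target.
    `model` maps (x, sigma) -> score, e.g. the ProjectionScoreNet sketch above."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(n_steps):
        x0 = data[torch.randint(len(data), (64,))]   # minibatch of clean samples
        eps = torch.randn_like(x0)
        x = x0 + sigma * eps                         # perturb at the local noise level
        sig = torch.full((x.shape[0], 1), sigma)
        loss = ((model(x, sig) + eps / sigma) ** 2).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

Capping n_steps is one natural formalization of coarse optimization, since the guarantees sought here should hold without assuming the empirical risk is driven to its minimum.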

References

Several directions remain open: From nonparametric function classes to explicit parametrizations. An important next step is to make this inductive bias explicit by working with a fully parametric score model and proving realizability/approximation guarantees under coarse optimization—for instance, via physics-informed neural networks (PINNs) or other structured architectures that directly encode projection-like behavior.

Manifold Generalization Provably Precedes Memorization in Diffusion Models (2603.23792 - Shen et al., 24 Mar 2026) in Conclusion, Open directions (1)
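The quoted passage names physics-informed architectures as one candidate. A hedged PINN-style sketch follows, assuming the small-noise score takes the form −d(x)∇d(x)/σ² for a distance function d to the manifold: parameterize d_theta directly and penalize violations of the eikonal equation ‖∇d_theta‖ = 1. The names EikonalDistanceScore and eikonal_penalty are hypothetical.

```python
import torch
import torch.nn as nn

class EikonalDistanceScore(nn.Module):
    """Sketch: parameterize a distance-like field d_theta and read off the
    small-noise score as s_theta(x, sigma) = -d_theta(x) * grad d_theta(x) / sigma**2,
    i.e. the gradient of -d_theta(x)**2 / (2 * sigma**2)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.d = nn.Sequential(
            nn.Linear(dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
        x = x.detach().requires_grad_(True)          # enable grad w.r.t. inputs
        d = self.d(x)                                # (batch, 1) distance estimates
        (grad_d,) = torch.autograd.grad(d.sum(), x, create_graph=True)
        return -d * grad_d / sigma**2                # projection-like score field

def eikonal_penalty(model: EikonalDistanceScore, x: torch.Tensor) -> torch.Tensor:
    """Physics-informed regularizer: a true distance function satisfies the
    eikonal equation ||grad d|| = 1; penalize squared deviations from it."""
    x = x.detach().requires_grad_(True)
    d = model.d(x)
    (grad_d,) = torch.autograd.grad(d.sum(), x, create_graph=True)
    return ((grad_d.norm(dim=-1) - 1.0) ** 2).mean()
```

The eikonal penalty is the physics-informed ingredient here: it imposes the PDE that an exact distance function satisfies, which is the same geometric structure the nonparametric analysis exploits.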