Broader noise schedules and practical discretizations
It would be valuable to extend the separation guarantees between geometry learning and distribution learning to broader noise schedules and to practically used discrete samplers, accounting for finite-step implementations, step-size selection, and common training idealizations such as truncated time horizons and non-uniform time weighting, while preserving a comparable separation between the two learning problems.
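To make the discretization choices above concrete, the following is a minimal sketch of a finite-step sampler: an Euler discretization of the probability-flow ODE under a variance-preserving schedule, run on a toy Gaussian target whose score is known in closed form. The linear beta(t) schedule with the stated endpoints, the quadratic (non-uniform) time grid, the truncation at t = eps, the toy data parameters, and all function names (gaussian_score, sample) are illustrative assumptions, not constructs from the text.

```python
import numpy as np

# Assumed VP (variance-preserving) schedule; the endpoints, the quadratic
# time grid, and the Gaussian toy data below are illustrative choices.
BETA_MIN, BETA_MAX = 0.1, 20.0

def beta(t):
    # Linear noise schedule beta(t) on [0, 1].
    return BETA_MIN + t * (BETA_MAX - BETA_MIN)

def alpha(t):
    # alpha(t) = exp(-0.5 * int_0^t beta(u) du) for the linear schedule.
    return np.exp(-0.5 * (BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t**2))

def gaussian_score(x, t, mu=2.0, s=0.5):
    # Closed-form score of p_t when the data are N(mu, s^2 I):
    # p_t = N(alpha(t) * mu, (alpha(t)^2 s^2 + 1 - alpha(t)^2) I).
    a = alpha(t)
    var = a**2 * s**2 + (1.0 - a**2)
    return -(x - a * mu) / var

def sample(n_steps=50, eps=1e-3, dim=1000, rng=None):
    rng = np.random.default_rng(rng)
    # Non-uniform (quadratic) grid on [eps, 1]: finer steps near t = eps,
    # truncated away from t = 0 -- the step-size selection and truncated
    # horizon the text refers to.
    ts = eps + (1.0 - eps) * np.linspace(1.0, 0.0, n_steps + 1) ** 2
    x = rng.standard_normal(dim)  # prior approx. N(0, I) at t = 1
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t_cur  # negative: integrating backward in time
        # Euler step on the probability-flow ODE
        # dx/dt = -0.5 * beta(t) * (x + grad log p_t(x)).
        drift = -0.5 * beta(t_cur) * (x + gaussian_score(x, t_cur))
        x = x + drift * dt
    return x

if __name__ == "__main__":
    x = sample(rng=0)
    print(f"sample mean {x.mean():.3f} (target 2.0), std {x.std():.3f} (target 0.5)")
```

Varying n_steps, eps, and the grid exponent in this sketch is one way to probe how finite-step error and horizon truncation interact with the schedule, which is the regime any extended separation guarantee would need to cover.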