Analyzing the Error of Generative Diffusion Models: From Euler-Maruyama to Higher-Order Schemes
Abstract: Although generative diffusion models (GDMs) are widely used in practice, their theoretical foundations remain limited, especially concerning the impact of different discretization schemes applied to the underlying stochastic differential equation (SDE). Existing convergence analysis largely focuses on Euler-Maruyama (EM)-like methods and does not extend to higher-order schemes, which are naturally expected to provide improved discretization accuracy. In this paper, we establish asymptotic 2-Wasserstein convergence results for SDE-based discretization methods employed in sampling for GDMs. We provide an all-at-once error bound analysis of the EM method that accounts for all error sources and establish convergence under all prevalent score-matching error assumptions in the literature, assuming a strongly log-concave data distribution. Moreover, we present the first error bound for arbitrary higher-order SDE-discretization methods with known strong L_2 convergence, expressed as a function of the discretization grid and the score-matching error. Finally, we complement our theoretical findings with an extensive numerical study, providing comprehensive experimental evidence and showing that, contrary to popular belief, higher-order discretization methods can in fact retain their theoretical advantage over EM for sampling GDMs.
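To make the setting concrete, the following is a minimal sketch (not the paper's method) of EM sampling for the reverse-time SDE of a diffusion model. All specifics are illustrative assumptions: a 1-D Gaussian data distribution N(mu, s^2), an Ornstein-Uhlenbeck forward process dx = -x dt + sqrt(2) dW with stationary prior N(0, 1), and the resulting closed-form score, so that the only error sources are the EM discretization and Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s = 2.0, 0.5       # illustrative data distribution N(mu, s^2)
T, h = 5.0, 0.01       # time horizon and EM step size
n = 50_000             # number of samples

def score(x, t):
    # Closed-form score of the forward marginal
    # p_t = N(mu*e^{-t}, s^2*e^{-2t} + 1 - e^{-2t}) under the OU forward process.
    m = mu * np.exp(-t)
    v = s**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return (m - x) / v

x = rng.standard_normal(n)          # initialize from the prior N(0, 1) at time T
for k in range(int(T / h)):
    t = T - k * h                   # reverse-time grid point
    # Reverse-SDE drift: -f + g^2 * score, with forward f(x) = -x and g = sqrt(2).
    drift = x + 2.0 * score(x, t)
    # One Euler-Maruyama step of size h.
    x = x + drift * h + np.sqrt(2.0 * h) * rng.standard_normal(n)

print(x.mean(), x.std())            # should approximate mu and s
```

With an exact score, the remaining gap between the sample statistics and (mu, s) isolates the discretization error that the paper's bounds control; swapping the EM update for a higher-order scheme on the same grid is the comparison the abstract refers to.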