
One-step Diffusion Models with $f$-Divergence Distribution Matching

Published 21 Feb 2025 in cs.LG, cs.AI, and cs.CV | arXiv:2502.15681v2

Abstract: Sampling from diffusion models involves a slow iterative process that hinders their practical deployment, especially for interactive applications. To accelerate generation speed, recent approaches distill a multi-step diffusion model into a single-step student generator via variational score distillation, which matches the distribution of samples generated by the student to the teacher's distribution. However, these approaches use the reverse Kullback-Leibler (KL) divergence for distribution matching which is known to be mode seeking. In this paper, we generalize the distribution matching approach using a novel $f$-divergence minimization framework, termed $f$-distill, that covers different divergences with different trade-offs in terms of mode coverage and training variance. We derive the gradient of the $f$-divergence between the teacher and student distributions and show that it is expressed as the product of their score differences and a weighting function determined by their density ratio. This weighting function naturally emphasizes samples with higher density in the teacher distribution, when using a less mode-seeking divergence. We observe that the popular variational score distillation approach using the reverse-KL divergence is a special case within our framework. Empirically, we demonstrate that alternative $f$-divergences, such as forward-KL and Jensen-Shannon divergences, outperform the current best variational score distillation methods across image generation tasks. In particular, when using Jensen-Shannon divergence, $f$-distill achieves current state-of-the-art one-step generation performance on ImageNet64 and zero-shot text-to-image generation on MS-COCO. Project page: https://research.nvidia.com/labs/genair/f-distill
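
Schematically, the gradient result stated in the abstract takes the following form (a paraphrased sketch rather than the paper's exact statement; $G_\theta$ denotes the one-step student generator, $s$ the scores, and $w_f$ the $f$-dependent weighting function, all as illustrative notation):

$$
\nabla_\theta D_f\big(p_{\text{teacher}} \,\|\, p_\theta\big) \;\propto\; -\,\mathbb{E}_{z}\!\left[\, w_f\big(r(x)\big)\,\big(s_{\text{teacher}}(x) - s_\theta(x)\big)\,\frac{\partial G_\theta(z)}{\partial \theta} \right], \qquad x = G_\theta(z), \quad r(x) = \frac{p_{\text{teacher}}(x)}{p_\theta(x)},
$$

where $s_{\text{teacher}} = \nabla_x \log p_{\text{teacher}}$ and $s_\theta = \nabla_x \log p_\theta$. The weight $w_f$ depends only on the density ratio $r$: the reverse-KL choice of $f$ makes $w_f$ constant, which is how standard variational score distillation arises as a special case, while less mode-seeking choices upweight regions where the teacher density is high.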

Summary

  • The paper introduces $f$-distill, a generalized framework that minimizes an $f$-divergence between the teacher and student distributions to enable one-step sampling from diffusion models.
  • The gradient of $f$-distill is the product of the teacher-student score difference and a density-ratio-dependent weighting function, generalizing previous variational score distillation methods and enabling less mode-seeking divergences such as Jensen-Shannon (JS).
  • Experiments show that $f$-distill with the JS divergence achieves state-of-the-art one-step generation on ImageNet-64 and on zero-shot text-to-image generation on MS-COCO.

Executive Summary

This paper introduces $f$-distill, a generalized distillation framework that minimizes the $f$-divergence between the teacher and student distributions to accelerate diffusion model sampling to a single step.

  • The framework's gradient is expressed as the product of the score difference between the teacher and student models and a weighting function determined by the density ratio and the chosen $f$-divergence.
  • $f$-distill encompasses existing variational score distillation methods as a special case and allows for the exploration of less mode-seeking divergences such as forward-KL and Jensen-Shannon (JS).
  • Experiments demonstrate that $f$-distill with JS divergence achieves state-of-the-art one-step generation performance on ImageNet-64 and zero-shot MS-COCO, highlighting the benefits of balancing mode coverage and gradient variance.
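
To make the bullets above concrete, here is a minimal, hypothetical PyTorch-style sketch of such a weighted score-difference update. The weighting forms use the standard $f$-divergence convention $D_f(p\|q)=\mathbb{E}_q[f(p/q)]$ with weight $h(r)=f''(r)\,r^2$; the function names, the density-ratio estimator, and the simplified noising step are illustrative assumptions, not the authors' implementation.

```python
import torch

def weight_fn(r: torch.Tensor, divergence: str = "js") -> torch.Tensor:
    """Weight h(r) = f''(r) * r^2 as a function of the teacher/student density ratio r."""
    if divergence == "reverse-kl":
        return torch.ones_like(r)      # constant weight: plain variational score distillation
    if divergence == "forward-kl":
        return r                       # emphasizes samples the teacher considers likely
    if divergence == "js":
        return r / (1.0 + r)           # bounded in [0, 1): lower-variance weighting
    raise ValueError(f"unknown divergence: {divergence}")

def f_distill_loss(generator, teacher_score, student_score, density_ratio,
                   z: torch.Tensor, t: float, divergence: str = "js") -> torch.Tensor:
    """One schematic distillation loss whose generator gradient is
    weight * (student score - teacher score) * d x_t / d theta."""
    x = generator(z)                       # one-step student sample
    x_t = x + t * torch.randn_like(x)      # crude noising; a real pipeline uses the diffusion forward process

    with torch.no_grad():
        score_gap = student_score(x_t, t) - teacher_score(x_t, t)   # score difference
        w = weight_fn(density_ratio(x_t, t), divergence)            # f-dependent weight

    # Surrogate loss: the detached (w * score_gap) multiplies x_t, so autograd routes
    # the weighted score difference back through the generator only.
    return (w * score_gap * x_t).sum() / z.shape[0]
```

The reverse-KL branch reduces to a constant weight, which is why standard variational score distillation falls out as a special case, while the bounded JS weight illustrates the mode-coverage versus training-variance trade-off the summary highlights. In practice, the student's score and the density ratio both have to be estimated with auxiliary networks trained alongside the generator, which this sketch leaves out.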


