Stochastic Interpolant Transports
- Stochastic interpolant transports are a framework that builds continuous-time stochastic processes to bridge prescribed source and target distributions.
- They employ parameterized drift and diffusion, using regression and score matching to efficiently simulate and learn complex distribution transitions.
- The approach offers strong regularity and contractivity, extends to manifold and discrete settings, and unifies deterministic flows with diffusion models.
Stochastic interpolant transports, also known as stochastic interpolant flows or random bridges, are a class of mathematical and algorithmic frameworks for constructing continuous-time stochastic processes that interpolate between prescribed source and target probability distributions, typically over a finite time interval. These transports generalize classical deterministic transport and modern generative modeling techniques, including normalizing flows, diffusion models, and Schrödinger bridges, providing a unifying perspective for deterministic and stochastic dynamical systems linking arbitrary distributions (Goria et al., 16 Dec 2025, Albergo et al., 2023, Ma et al., 2024).
1. Mathematical Foundations of Stochastic Interpolant Transports
Stochastic interpolant transports are defined by specifying a stochastic process $(x_t)_{t \in [0,1]}$ such that $x_0 \sim \rho_0$ and $x_1 \sim \rho_1$ for given source and target measures $\rho_0$ and $\rho_1$. The core construction begins by introducing a family of time-indexed random variables built from a coupling (independent or data-dependent) of $\rho_0$ and $\rho_1$, possibly together with auxiliary latent variables (Albergo et al., 2023).
A canonical example is the linear stochastic interpolant,

$$x_t = \alpha(t)\, x_0 + \beta(t)\, x_1 + \gamma(t)\, z,$$

where $z \sim N(0, I)$ is independent noise and $\alpha, \beta, \gamma$ are schedule functions satisfying boundary conditions (e.g., $\alpha(0) = \beta(1) = 1$, $\alpha(1) = \beta(0) = 0$, $\gamma(0) = \gamma(1) = 0$) (Albergo et al., 2023).
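As a concrete illustration, the linear interpolant can be sampled directly from its definition. The sketch below uses toy 1-D Gaussian endpoints, an independent coupling, and the schedule $\alpha(t) = 1 - t$, $\beta(t) = t$, $\gamma(t) = \sqrt{t(1-t)}$; all of these are illustrative choices, not prescriptions from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(t): return 1.0 - t                  # weight on the source sample
def beta(t):  return t                        # weight on the target sample
def gamma(t): return np.sqrt(t * (1.0 - t))   # noise schedule, vanishes at both endpoints

def sample_interpolant(t, n=10_000):
    """Draw n samples of x_t = alpha(t) x0 + beta(t) x1 + gamma(t) z."""
    x0 = rng.standard_normal(n)               # source rho_0 = N(0, 1)
    x1 = 4.0 + rng.standard_normal(n)         # target rho_1 = N(4, 1), a toy choice
    z  = rng.standard_normal(n)               # latent noise, independent coupling
    return alpha(t) * x0 + beta(t) * x1 + gamma(t) * z

# The marginal mean interpolates linearly between the endpoint means (0 and 4).
for t in (0.0, 0.5, 1.0):
    print(f"t={t:.1f}  mean={sample_interpolant(t).mean():+.2f}")
```

Note how the boundary conditions on the schedules guarantee that the $t = 0$ and $t = 1$ marginals are exactly the source and target laws.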
The law of $x_t$ is a time-marginal interpolation between $\rho_0$ and $\rho_1$, with the density evolution analytically available in the case of Gaussian couplings and certain schedule choices (Goria et al., 16 Dec 2025, George et al., 1 Feb 2025). Unlike optimal transport, which relies on deterministic maps, stochastic interpolants construct bridges: Markovian or non-Markovian stochastic processes conditioned to hit the prescribed marginals at the endpoints.
This framework covers both deterministic flows and diffusions:
- Probability-flow ODE: $\dot X_t = b(t, X_t)$ (samples follow deterministic paths from $\rho_0$ to $\rho_1$).
- Stochastic SDE: $dX_t = \big(b(t, X_t) + \varepsilon\, s(t, X_t)\big)\, dt + \sqrt{2\varepsilon}\, dW_t$, where $s = \nabla \log \rho$ is the score (samples evolve under both drift and noise) (Albergo et al., 2023, Goria et al., 16 Dec 2025).
The key mathematical property is that the time-dependent density $\rho(t, \cdot)$ of the process satisfies both the Kolmogorov (continuity) equation

$$\partial_t \rho + \nabla \cdot (b\, \rho) = 0,$$

and, for SDE transports, the forward/backward Fokker–Planck equations

$$\partial_t \rho = -\nabla \cdot \big((b + \varepsilon s)\, \rho\big) + \varepsilon\, \Delta \rho,$$

allowing for nonzero noise (Goria et al., 16 Dec 2025, Albergo et al., 2023, George et al., 1 Feb 2025).
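For 1-D Gaussian endpoints the marginal density and the transporting drift are available in closed form, so the continuity equation can be checked numerically by finite differences. The endpoint means/variances and the schedule below are illustrative choices:

```python
import numpy as np

# Toy 1-D Gaussian endpoints (hypothetical choices for illustration).
m0, s0 = 0.0, 1.0
m1, s1 = 4.0, 1.0

def moments(t):
    """Closed-form mean/std of x_t = (1-t) x0 + t x1 + sqrt(t(1-t)) z."""
    a, b, g = 1 - t, t, np.sqrt(t * (1 - t))
    m = a * m0 + b * m1
    s = np.sqrt(a ** 2 * s0 ** 2 + b ** 2 * s1 ** 2 + g ** 2)
    return m, s

def rho(t, x):
    """Gaussian marginal density rho(t, x)."""
    m, s = moments(t)
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def drift(t, x, h=1e-5):
    """Transport velocity b(t, x) = m'(t) + (s'(t)/s(t)) (x - m(t))."""
    (mp, sp), (mm, sm) = moments(t + h), moments(t - h)
    m, s = moments(t)
    dm, ds = (mp - mm) / (2 * h), (sp - sm) / (2 * h)
    return dm + (ds / s) * (x - m)

# Check d_t rho + d_x (b rho) = 0 at an interior point (t, x).
t, x, h = 0.3, 1.0, 1e-5
dt_rho = (rho(t + h, x) - rho(t - h, x)) / (2 * h)
dx_flux = (drift(t, x + h) * rho(t, x + h)
           - drift(t, x - h) * rho(t, x - h)) / (2 * h)
residual = dt_rho + dx_flux
print(f"continuity residual: {residual:.2e}")
```

The residual is zero up to finite-difference error, consistent with the continuity equation above.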
2. Representation, Learning, and Simulation
To operationalize stochastic interpolant transports in generative modeling, the drift field and sometimes the diffusion coefficient are parameterized, typically using neural networks (Ma et al., 2024, Albergo et al., 2023, Wu et al., 22 Apr 2025).
The process can be learned by minimizing regression objectives:
- Conditional mean matching: the drift $b(t, x) = \mathbb{E}[\dot x_t \mid x_t = x]$ is estimated by minimizing squared error against observed increments, i.e., $\min_{\hat b} \mathbb{E}\, \|\hat b(t, x_t) - \dot x_t\|^2$ (Goria et al., 16 Dec 2025, Ma et al., 2024, Albergo et al., 2023).
- Score matching: The score $s(t, x) = \nabla_x \log \rho(t, x)$ is learned via quadratic losses, with denoising score objectives equivalent to conditional mean estimation in Gaussian settings (Albergo et al., 2023).
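Conditional mean matching has a direct Monte Carlo reading: the drift at $(t, x)$ is the average velocity of interpolant paths passing near that point. The sketch below estimates it by windowed averaging rather than a neural network, on toy 1-D Gaussian endpoints (illustrative assumptions throughout):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear interpolant x_t = (1-t) x0 + t x1 + t(1-t) z (toy 1-D endpoints),
# whose time derivative dx_t/dt = (x1 - x0) + (1 - 2t) z is the regression target.
n = 400_000
t  = rng.uniform(0, 1, n)
x0 = rng.standard_normal(n)            # rho_0 = N(0, 1)
x1 = 4.0 + rng.standard_normal(n)      # rho_1 = N(4, 1)
z  = rng.standard_normal(n)
xt  = (1 - t) * x0 + t * x1 + t * (1 - t) * z
dxt = (x1 - x0) + (1 - 2 * t) * z

# b(t, x) = E[dx_t/dt | x_t = x]: estimate it at (t, x) = (0.5, 2.0) by
# averaging path velocities in a small window, a nonparametric stand-in
# for the neural regression used in practice.
window = (np.abs(t - 0.5) < 0.02) & (np.abs(xt - 2.0) < 0.05)
b_est = dxt[window].mean()
print(f"b(0.5, 2.0) ~= {b_est:.2f}")
```

For this Gaussian toy pair the true drift at the marginal mean equals the mean velocity $m_1 - m_0 = 4$, which the windowed estimate recovers up to Monte Carlo error.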
Simulation is performed by numerically integrating the learned ODE/SDE using Euler–Maruyama or higher-order solvers, often requiring far fewer steps (e.g., $10$–$100$) than standard diffusion models (which can require $1000+$ steps) while maintaining competitive or superior sample quality (assessed by FID and other metrics) (Goria et al., 16 Dec 2025, Ma et al., 2024).
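A minimal Euler–Maruyama sketch of SDE-based simulation, using the 1-D Gaussian toy pair for which drift and score are available in closed form (an assumption made purely for tractability; in practice both are learned networks):

```python
import numpy as np

rng = np.random.default_rng(2)

# Closed-form Gaussian marginals of the toy interpolant
# x_t = (1-t) x0 + t x1 + t(1-t) z with x0 ~ N(0,1), x1 ~ N(4,1):
def mean(t):  return 4.0 * t
def var(t):   return (1 - t) ** 2 + t ** 2 + (t * (1 - t)) ** 2
def dvar(t):  return (-2 * (1 - t) + 2 * t
                      + 2 * t * (1 - t) ** 2 - 2 * t ** 2 * (1 - t))

def drift(t, x):   # transport velocity b(t, x) of a Gaussian marginal path
    return 4.0 + 0.5 * dvar(t) / var(t) * (x - mean(t))

def score(t, x):   # s(t, x) = d/dx log rho_t(x)
    return -(x - mean(t)) / var(t)

# Euler-Maruyama on dX = [b + eps*s] dt + sqrt(2 eps) dW; for any eps > 0
# the injected noise and the score correction cancel at the marginal level.
eps, n_steps, n = 0.5, 200, 20_000
dt = 1.0 / n_steps
x = rng.standard_normal(n)                     # start from rho_0 = N(0, 1)
for k in range(n_steps):
    t = k * dt
    x = x + (drift(t, x) + eps * score(t, x)) * dt
    x = x + np.sqrt(2 * eps * dt) * rng.standard_normal(n)

print(f"final mean {x.mean():.2f}, std {x.std():.2f}")   # target N(4, 1)
```

The diffusion level `eps` here is a free knob, reflecting the decoupling of drift learning from noise level noted in the text.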
A frequent design is to use affine schedules (e.g., linear $\alpha(t) = 1 - t$, $\beta(t) = t$; trigonometric schedules; or interpolants from Schrödinger bridge theory) and to select coupling strategies that minimize transport curvature and improve efficiency (Goria et al., 16 Dec 2025, Daniels, 14 Apr 2025, Albergo et al., 2023).
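One simple way to compare schedules, under toy Gaussian endpoints and an independent coupling (illustrative assumptions), is the mean squared path acceleration: a noiseless linear schedule produces perfectly straight conditional paths, while a trigonometric schedule bends every path:

```python
import numpy as np

rng = np.random.default_rng(5)

# Path acceleration of a noiseless interpolant x_t = alpha(t) x0 + beta(t) x1
# is alpha''(t) x0 + beta''(t) x1.
x0 = rng.standard_normal(100_000)          # rho_0 = N(0, 1)
x1 = 4.0 + rng.standard_normal(100_000)    # rho_1 = N(4, 1)

def mean_sq_accel(alpha_dd, beta_dd, ts):
    """Average over t and samples of |alpha''(t) x0 + beta''(t) x1|^2."""
    return float(np.mean([np.mean((alpha_dd(t) * x0 + beta_dd(t) * x1) ** 2)
                          for t in ts]))

ts = np.linspace(0.0, 1.0, 21)
lin  = mean_sq_accel(lambda t: 0.0, lambda t: 0.0, ts)  # alpha = 1-t, beta = t
trig = mean_sq_accel(lambda t: -(np.pi / 2) ** 2 * np.cos(np.pi * t / 2),
                     lambda t: -(np.pi / 2) ** 2 * np.sin(np.pi * t / 2), ts)
print(f"mean |x''|^2  linear: {lin:.2f}   trigonometric: {trig:.2f}")
```

Lower curvature means coarser ODE discretizations suffice, which is one motivation for schedule and coupling design.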
3. Theoretical Properties: Contractivity, Bias, and Regularity
Stochastic interpolant flows exhibit strong regularity and contractivity properties under log-concavity or convexity assumptions on the endpoints. When $\rho_0 = N(0, I_d)$ (Gaussian base) and $\rho_1$ is strongly log-concave, the interpolant flow map is Lipschitz with a sharp constant matching the Caffarelli contraction theorem for optimal transport maps (Daniels, 14 Apr 2025).
General non-Gaussian endpoints admit contractive flows provided uniform spectral bounds on the Hessians of the potentials, affording dimension-free quantitative regularity of the transport maps. This governs estimator complexity, discretization error, and Monte Carlo concentration (Daniels, 14 Apr 2025).
KL-divergence and Wasserstein error between the target and the generated distribution under the interpolant flow can be tightly controlled by the training losses on the drift $b$ and score $s$, and explicit bounds are available for both Euclidean and Riemannian settings (Wu et al., 22 Apr 2025, Albergo et al., 2023).
A notable geometric insight is the “straightness” criterion: if the induced conditional mean ODE has zero pointwise acceleration, then the flow is exactly a straight line and admits exact integration in a single step—a property uniquely characterized in (Tsimpos et al., 13 Oct 2025).
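The straightness criterion can be illustrated with a toy deterministic coupling: shifting a Gaussian, with $x_1 = x_0 + 4$ as the transport map (an illustrative choice), makes the conditional-mean drift constant, so a single Euler step integrates the flow exactly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Deterministic coupling x1 = x0 + 4 between N(0,1) and N(4,1):
# the conditional-mean drift b(t, x) = E[x1 - x0 | x_t = x] = 4 is constant,
# so every probability-flow trajectory is a straight line.
b = lambda t, x: np.full_like(x, 4.0)

x0 = rng.standard_normal(10_000)

# One Euler step over the whole interval [0, 1] ...
one_step = x0 + 1.0 * b(0.0, x0)

# ... agrees with a fine discretization, up to floating point.
x = x0.copy()
n_steps = 100
for k in range(n_steps):
    x = x + b(k / n_steps, x) / n_steps

print(f"max one-step vs. 100-step gap: {np.max(np.abs(one_step - x)):.2e}")
```

With zero pointwise acceleration there is nothing for extra integration steps to correct, which is exactly the single-step property described above.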
4. Extensions: Manifolds, Conditioning, and Control
Recent research has extended stochastic interpolant transports to manifold-valued settings. The Riemannian Neural Geodesic Interpolant (RNGI) constructs interpolant flows on geodesically complete manifolds by leveraging the exponential and logarithm maps, with the associated velocity and score fields parameterized in the tangent bundle and trained by analogous regression losses. The E-SDE sampling algorithm, using embeddings and orthogonal projections in Euclidean space, achieves significantly improved bias and convergence properties on benchmark manifolds compared to previous geodesic random walk approaches (Wu et al., 22 Apr 2025).
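The geodesic building block underlying such constructions is $x_t = \mathrm{Exp}_{x_0}(t\, \mathrm{Log}_{x_0}(x_1))$. A minimal sketch on the unit 2-sphere (noise term omitted; the exp/log formulas are the standard ones for the sphere, not code from the cited work):

```python
import numpy as np

def log_map(p, q):
    """Riemannian log on the unit sphere: tangent vector at p pointing to q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    return theta / np.sin(theta) * (q - c * p)

def exp_map(p, v):
    """Riemannian exp on the unit sphere: follow the geodesic from p along v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p.copy()
    return np.cos(norm) * p + np.sin(norm) * v / norm

def geodesic_interpolant(p, q, t):
    """x_t = Exp_p(t Log_p(q)): constant-speed geodesic from p to q."""
    return exp_map(p, t * log_map(p, q))

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = geodesic_interpolant(p, q, 0.5)
print(mid, np.linalg.norm(mid))   # midpoint stays on the sphere
```

Unlike the Euclidean linear interpolant, this construction never leaves the manifold, which is why velocity and score fields are parameterized in the tangent bundle.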
Data-dependent couplings further allow conditional or joint generation tasks by correlating the base and target variables to ensure flows better follow conditional structure (as in super-resolution or inpainting). This reduces interpolant curvature, improves training sample efficiency, and enables optimal integration of conditioning signals (Albergo et al., 2023).
Control-theoretic interpretations are also natural: reward-tilted terminal distributions and optimal-control perspectives (as in the Tilt Matching framework) reveal that the velocity field under reward-tilting is the unique minimizer of a quadratic-actuated optimal control problem, solved efficiently through covariance ODEs or cumulant expansions, without requiring trajectory-level backpropagation or reward gradients (Potaptchik et al., 26 Dec 2025).
5. Algorithms and Empirical Performance
Stochastic interpolant transports are algorithmically instantiated through a sequence of steps:
- Define the interpolant structure (interpolation schedules, endpoint coupling).
- Parameterize the drift and optionally score by neural nets (MLPs or CNNs with appropriate time embedding).
- Train by sample-based regression: draw $(t, x_0, x_1, z)$, form $x_t$ via the interpolant, and minimize relevant losses (conditional mean or denoising).
- Simulate the learned transport ODE/SDE to generate samples of $\rho_1$ from draws of $\rho_0$.
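The full pipeline above can be sketched end to end on a toy 1-D problem. Per-time-bin linear regression stands in for the neural drift network, and deterministic probability-flow integration stands in for the sampler; the Gaussian endpoints are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1. Interpolant data: x_t = (1-t) x0 + t x1 + t(1-t) z for a toy 1-D pair.
n = 500_000
t  = rng.uniform(0, 1, n)
x0 = rng.standard_normal(n)           # rho_0 = N(0, 1)
x1 = 4.0 + rng.standard_normal(n)     # rho_1 = N(4, 1)
z  = rng.standard_normal(n)
xt  = (1 - t) * x0 + t * x1 + t * (1 - t) * z
dxt = (x1 - x0) + (1 - 2 * t) * z     # regression target d x_t / d t

# 2. "Train" the drift: per-time-bin linear regression in x, a crude
# stand-in for the neural drift network (adequate here because the true
# drift is affine in x for Gaussian endpoints).
n_bins = 50
bin_of = np.minimum((t * n_bins).astype(int), n_bins - 1)
coef = np.zeros((n_bins, 2))          # intercept, slope per bin
for i in range(n_bins):
    m = bin_of == i
    A = np.stack([np.ones(m.sum()), xt[m]], axis=1)
    coef[i] = np.linalg.lstsq(A, dxt[m], rcond=None)[0]

def b_hat(tt, x):
    i = min(int(tt * n_bins), n_bins - 1)
    return coef[i, 0] + coef[i, 1] * x

# 3. Sample: integrate the probability-flow ODE forward from rho_0.
n_gen, n_steps = 20_000, 100
dt = 1.0 / n_steps
x = rng.standard_normal(n_gen)
for k in range(n_steps):
    x = x + b_hat((k + 0.5) * dt, x) * dt

print(f"generated mean {x.mean():.2f}, std {x.std():.2f}")   # target N(4, 1)
```

Swapping step 3 for an Euler–Maruyama loop with a learned score turns the same pipeline into an SDE sampler.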
Sampling procedures can be deterministic (probability-flow ODE, e.g., Heun’s or RK methods) or stochastic (Euler–Maruyama for SDEs), with diffusion coefficients and discretization schedule decoupled to optimize sample efficiency (Ma et al., 2024, Goria et al., 16 Dec 2025).
Empirically, stochastic interpolant bridges such as Gaussian random bridges attain strong FID scores in generative modeling tasks (MNIST, CIFAR-10, ImageNet) using $10$–$100$ steps, compared to $1000+$ steps required by DDPMs; similar improvements are observed in sampling efficiency for high-dimensional distributions where classical MCMC methods struggle (Goria et al., 16 Dec 2025, George et al., 1 Feb 2025).
The table below summarizes key empirical findings:
| Model | Steps (m) | FID (MNIST) | FID (CIFAR-10) |
|---|---|---|---|
| Bridge (Gaussian) | 10 | 20 | (See Table B) |
| DDPM | 10 | 137 | — |
| DDPM | 1000 | <4 | — |
Additional sample quality and computational trade-offs are reported in (Goria et al., 16 Dec 2025, Ma et al., 2024).
6. Special Cases, Limit Behaviors, and Discrete Structures
Stochastic interpolants reproduce and generalize many classical and contemporary constructs:
- Schrödinger bridges: obtained via entropy-regularized variants or directly as limits of interpolant optimization (Albergo et al., 2023, Ciccone et al., 2020).
- Optimal transport: deterministic interpolants under deterministic couplings yield Monge maps; stochastic interpolants with correlated noise recover entropic interpolations (Tsimpos et al., 13 Oct 2025, Léonard, 2013).
- Discrete settings: On graphs, displacement interpolations arise as the low-temperature limit of lazy random walks and discrete Schrödinger problems, with $\Gamma$-convergence to optimal transport on graphs (Léonard, 2013).
- Backward interpolation: The Itô–Ventzell and Alekseev–Gröbner formulae provide explicit interpolation identities between two SDE flows, including in anticipative settings (Moral et al., 2019).
7. Unifying Perspective and Future Directions
Stochastic interpolant transports unify flow-based, diffusion-based, and control-based generative frameworks. They offer a modular and tunable approach to interpolating between distributions, supporting deterministic and stochastic evolution, explicit density calculations, likelihood estimation, and critical regularity guarantees. Their recent extensions to Riemannian manifolds, high-dimensional sampling, reward-based control problems, and graph-structured data further expand their scope (Wu et al., 22 Apr 2025, George et al., 1 Feb 2025, Potaptchik et al., 26 Dec 2025, Léonard, 2013).
Ongoing research addresses open problems in importance-weighted corrections, hyperparameter schedule optimization, scaling to high-dimensional structured data, and the design of interpolants which admit exact or low-curvature ODE flows. Stochastic interpolant transports stand as a central paradigm in contemporary generative modeling and probabilistic representation learning.