Mirror Langevin Diffusion: Theory & Applications
- Mirror Langevin Diffusion is a framework that uses a Legendre-type mirror map to define a state-dependent Riemannian geometry for efficient sampling over constrained domains.
- It incorporates inverse Hessian-driven drift and diffusion to ensure boundary adherence without explicit projections and to achieve improved convergence rates.
- Discretizations like MLA, MAMLA, and BMUMLA provide practical algorithms for both smooth and nonsmooth targets via Bregman divergence and self-concordant barrier properties.
Mirror Langevin Diffusion generalizes classical Langevin dynamics by replacing Euclidean geometry with a Riemannian geometry determined by a Legendre-type mirror map. This methodology is central for efficient sampling and optimization over constrained or non-Euclidean domains, underpinning recent advances in theory and practice for both mean-field and particle-based dynamics. The mirror Langevin framework incorporates state-dependent drift and diffusion coefficients linked to the inverse Hessian of the mirror potential, ensuring boundary adherence without explicit projections and yielding improved convergence rates under mild convexity and regularity conditions.
1. Mathematical Foundation and Mirror Geometry
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be an open convex domain and $\phi: \mathcal{X} \to \mathbb{R}$ a Legendre-type barrier function (i.e., $\phi \in C^2$, strictly convex, with $\|\nabla\phi(x)\| \to \infty$ as $x \to \partial\mathcal{X}$). The mirror map is given by $\nabla\phi: \mathcal{X} \to \mathbb{R}^d$, with inverse $(\nabla\phi)^{-1} = \nabla\phi^*$, where $\phi^*(y) = \sup_{x \in \mathcal{X}}\{\langle x, y\rangle - \phi(x)\}$ is the convex conjugate of $\phi$. The associated Riemannian metric is $g(x) = \nabla^2\phi(x)$, providing a space-dependent preconditioning in both the drift and diffusion terms. The Bregman divergence $D_\phi(x, y) = \phi(x) - \phi(y) - \langle\nabla\phi(y), x - y\rangle$ captures the geometry induced by $\phi$.
Classical Langevin diffusion on $\mathbb{R}^d$ corresponds to $\phi(x) = \tfrac{1}{2}\|x\|^2$. In the mirror formulation, one replaces this quadratic potential by a $\phi$ adapted to domain constraints or problem structure (e.g., negative entropy for the simplex, log-barrier for polytopes).
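As a concrete instance of these definitions, the binary-entropy barrier $\phi(x) = x\log x + (1-x)\log(1-x)$ on the interval $(0, 1)$ gives closed-form mirror maps (logit and sigmoid). The sketch below — the barrier choice and test points are illustrative assumptions, not from the cited works — numerically checks the inverse-map identity $\nabla\phi^* = (\nabla\phi)^{-1}$ and the positivity of the Bregman divergence:

```python
import numpy as np

# Binary-entropy barrier phi(x) = x log x + (1 - x) log(1 - x) on (0, 1).
# Its mirror map is the logit; the inverse mirror map is the sigmoid.

def grad_phi(x):
    """Mirror map: grad phi(x) = log(x / (1 - x))."""
    return np.log(x / (1.0 - x))

def grad_phi_star(y):
    """Inverse mirror map: grad phi*(y) = sigmoid(y)."""
    return 1.0 / (1.0 + np.exp(-y))

def bregman(x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    phi = lambda t: t * np.log(t) + (1.0 - t) * np.log(1.0 - t)
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

x = 0.3
assert abs(grad_phi_star(grad_phi(x)) - x) < 1e-12   # (grad phi)^{-1} = grad phi*
assert bregman(0.7, 0.3) > 0.0                        # strict convexity => positivity
```

The same pattern (explicit `grad_phi`, `grad_phi_star`, and Hessian) is what practical mirror Langevin implementations rely on.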
2. Mirror Langevin Stochastic Differential Equations
In continuous time, the mirror Langevin diffusion for a target density $\pi \propto e^{-V}$ on $\mathcal{X}$ evolves in dual coordinates as
$$dY_t = -\nabla V(X_t)\,dt + \sqrt{2}\,[\nabla^2\phi(X_t)]^{1/2}\,dB_t,$$
with $Y_t = \nabla\phi(X_t)$, i.e., $X_t = \nabla\phi^*(Y_t)$. By Itô's formula, the primal process $X_t$ satisfies an SDE whose drift and diffusion coefficients are preconditioned by the inverse Hessian $[\nabla^2\phi(X_t)]^{-1}$, with an additional Itô correction term arising from the state dependence of the metric.
For mean-field or interacting particle systems (as in Mirror Mean-Field Langevin Dynamics, MMFLD), the drift depends on the first variation $\frac{\delta F}{\delta\mu}$ of a measure-functional $F(\mu)$ (see (Gu et al., 5 May 2025)). In dual variables, this reads
$$dY_t = -\nabla\frac{\delta F}{\delta\mu}(\mu_t)(X_t)\,dt + \sqrt{2\lambda}\,[\nabla^2\phi(X_t)]^{1/2}\,dB_t,$$
where $\mu_t = \mathrm{Law}(X_t)$, $Y_t = \nabla\phi(X_t)$, and $\lambda > 0$ is the entropic regularization parameter.
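A finite-particle sketch of these dynamics can be written directly from the dual-coordinate update. Everything below is an illustrative assumption rather than a setup from the cited paper: a confining potential $V(x) = 2(x - 1/2)^2$, a quadratic pairwise interaction $W(x, y) = (x - y)^2/2$, the binary-entropy barrier on $(0, 1)$, and the parameters `N`, `h`, `lam`:

```python
import numpy as np

# Particle discretization of mirror mean-field Langevin dynamics on (0, 1)
# with the binary-entropy barrier; V, W, N, h, lam, steps are all assumptions.

rng = np.random.default_rng(0)
N, h, lam, steps = 200, 1e-3, 0.1, 2000

x = rng.uniform(0.2, 0.8, size=N)                 # particles strictly inside (0, 1)
for _ in range(steps):
    # Gradient of the first variation at the empirical measure:
    # grad V(x_i) + (1/N) sum_j grad_x W(x_i, x_j) = 4 (x_i - 1/2) + (x_i - mean).
    drift = 4.0 * (x - 0.5) + (x - x.mean())
    hess = 1.0 / (x * (1.0 - x))                  # Hessian of the entropy barrier
    y = np.log(x / (1.0 - x))                     # dual (mirror) coordinates
    y += -h * drift + np.sqrt(2.0 * lam * h * hess) * rng.standard_normal(N)
    y = np.clip(y, -30.0, 30.0)                   # numerical safeguard only
    x = 1.0 / (1.0 + np.exp(-y))                  # back to primal via sigmoid
```

Note that feasibility is automatic: the sigmoid maps every dual iterate back into $(0, 1)$, so no projection step appears anywhere in the loop.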
Self-concordance or barrier blow-up properties ensure that $\|\nabla\phi(x)\|$ diverges as $x \to \partial\mathcal{X}$, so the process remains strictly inside $\mathcal{X}$ almost surely, obviating the need for projection or reflection schemes (Chok et al., 6 Oct 2025).
3. Discretizations and Practical Algorithms
The foundational discretization is the Mirror Langevin Algorithm (MLA), arising from an Euler–Maruyama scheme in dual coordinates:
$$x_{k+1} = \nabla\phi^*\big(\nabla\phi(x_k) - h\,\nabla V(x_k) + \sqrt{2h}\,[\nabla^2\phi(x_k)]^{1/2}\,\xi_k\big),$$
with step size $h > 0$ and $\xi_k \sim \mathcal{N}(0, I_d)$ i.i.d.
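Each MLA iteration is thus a Gaussian step in dual coordinates followed by the inverse mirror map. A minimal sketch for a Beta(3, 3) target on $(0, 1)$ with the binary-entropy barrier — the target, step size, and chain length are illustrative choices, not from the cited works:

```python
import numpy as np

# Unadjusted MLA on (0, 1): logit/sigmoid mirror pair, Beta(3, 3) target.
rng = np.random.default_rng(0)
h, a, b = 0.01, 3.0, 3.0

grad_V = lambda x: -(a - 1.0) / x + (b - 1.0) / (1.0 - x)  # V = -log Beta(a, b) density
hess_phi = lambda x: 1.0 / (x * (1.0 - x))                 # Hessian of entropy barrier

x, samples = 0.5, []
for _ in range(20000):
    y = np.log(x / (1.0 - x))                              # to dual coordinates
    y += -h * grad_V(x) + np.sqrt(2.0 * h * hess_phi(x)) * rng.standard_normal()
    y = np.clip(y, -30.0, 30.0)                            # numerical safeguard only
    x = 1.0 / (1.0 + np.exp(-y))                           # mirror back: sigmoid
    samples.append(x)

est = np.mean(samples[2000:])                              # Beta(3, 3) has mean 1/2
```

Every iterate lies strictly in $(0, 1)$ by construction; the chain carries the $O(\sqrt{h})$ discretization bias of the unadjusted scheme, which the symmetric target makes invisible in the mean.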
For mean-field systems, time and particle discretization leads to propagation of chaos and uniform-in-time convergence under suitable MLSI assumptions (Gu et al., 5 May 2025). In composite or nonsmooth settings, the Bregman–proximal Mirror Langevin Monte Carlo (BMUMLA) allows use of Bregman–Moreau envelopes for envelope smoothing and Bregman projections for handling indicator functions or nonsmooth convex constraints (Lau et al., 2022).
To eliminate step-size discretization bias, the Metropolis-adjusted Mirror Langevin Algorithm (MAMLA) incorporates a Metropolis–Hastings accept–reject step, yielding unbiased chains whose mixing time scales logarithmically in the inverse error tolerance under relative smoothness, convexity, and self-concordance conditions (Srinivasan et al., 2023). In polyhedral domains, the Dikin–Langevin process, driven by the log-barrier, admits an MH-corrected discretization that reduces to an interior-point Dikin random walk in the zero-drift case (Chok et al., 6 Oct 2025).
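The accept–reject mechanics are easiest to see in one dimension. The sketch below is a hedged toy version of a Metropolis-adjusted mirror Langevin step, not the exact MAMLA of (Srinivasan et al., 2023): it assumes the log-barrier $\phi(x) = -\log x$ on $(0, \infty)$, an Exponential(1) target ($V(x) = x$), and a Gaussian dual-coordinate proposal whose density carries the mirror-map Jacobian $|dy/dx| = 1/x^2$:

```python
import numpy as np

# MH-corrected mirror Langevin on (0, inf): phi(x) = -log x, so y = -1/x,
# Hessian phi''(x) = 1/x^2; target pi(x) = exp(-x). h is an assumption.
rng = np.random.default_rng(1)
h = 0.05

def log_q(x_to, x_from):
    """Log proposal density of x_from -> x_to, up to an additive constant:
    Gaussian in dual coordinates times the Jacobian 1/x_to**2."""
    mean = -1.0 / x_from - h                  # grad phi(x) - h grad V(x)
    var = 2.0 * h / x_from**2                 # 2 h * Hessian of phi
    y_to = -1.0 / x_to
    return -0.5 * (y_to - mean) ** 2 / var - 0.5 * np.log(var) - 2.0 * np.log(x_to)

x, samples = 1.0, []
for _ in range(20000):
    mean, sd = -1.0 / x - h, np.sqrt(2.0 * h) / x
    y_new = mean + sd * rng.standard_normal()
    if y_new < 0.0:                           # y >= 0 falls outside the dual image: reject
        x_new = -1.0 / y_new
        log_alpha = (x - x_new) + log_q(x, x_new) - log_q(x_new, x)
        if np.log(rng.uniform()) < log_alpha:
            x = x_new
    samples.append(x)

est = np.mean(samples[5000:])                 # Exp(1) has mean 1
```

Because the accept–reject ratio uses the exact proposal density (Gaussian-in-dual times Jacobian), the chain's stationary distribution is exactly Exponential(1) for any step size $h$.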
| Algorithm | Bias Order | Handles Constraints | Stationarity |
|---|---|---|---|
| MLA | $O(\sqrt{h})$ (mean-square, unadjusted) (Li et al., 2021) | Yes | Approximate |
| MAMLA | None (exact MH accept–reject) | Yes | Exact |
| BMUMLA | Controlled by envelope parameter $\lambda$ (Lau et al., 2022) | Yes (Bregman prox) | Biased (controlled by $\lambda$) |
4. Convergence Guarantees and Functional Inequalities
The ergodicity and contractivity of mirror Langevin dynamics hinge on mirror log-Sobolev (MLSI) and mirror-Poincaré inequalities, generalized to the geometry induced by $\phi$. For instance, for convex $F$ and a suitable MLSI with parameter $\rho > 0$, the entropy decay under MMFLD is exponential:
$$\mathcal{F}(\mu_t) - \mathcal{F}(\mu_*) \le e^{-2\lambda\rho t}\,\big(\mathcal{F}(\mu_0) - \mathcal{F}(\mu_*)\big),$$
where $\mu_*$ is the steady-state (proximal Gibbs) measure (Gu et al., 5 May 2025). Analogous exponential ergodicity results hold for sampling and optimization, utilizing mirror-Wasserstein, Bregman, or Riemannian metrics (Chewi et al., 2020, Li et al., 2021, Zhang et al., 2020).
For systems with particle discretization, convergence bounds include an $O(1/N)$ particle-approximation error and an $O(\sqrt{h})$ time-discretization error, both uniform in time under a mirror LSI (Gu et al., 5 May 2025). In the unadjusted MLA, mean-square analysis gives a global error of $O(\sqrt{h})$ (Li et al., 2021).
5. Sampling over Constrained Domains
Mirror Langevin frameworks are particularly effective for sampling from constrained distributions, such as those supported on polytopes, simplices, or box domains. By selecting mirror potentials with boundary blow-up (e.g., log-barriers for polytopes, negative entropy for the simplex), the dynamics remain strictly feasible, and explicit reflections or projections are unnecessary. For polytopes, the Dikin–Langevin process uses the log-barrier Hessian to induce a mirror geometry tailored to face and vertex structure, ensuring no-flux at the boundary (Chok et al., 6 Oct 2025).
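To make the barrier geometry concrete, the following sketch runs a zero-drift Dikin-style walk — the drift-free, MH-corrected case described above — targeting the uniform distribution on a small 2D polytope. The polytope, radius parameter `h`, and iteration count are illustrative assumptions:

```python
import numpy as np

# Zero-drift Dikin walk on {x >= 0, y >= 0, x + y <= 1}, written as A x <= b.
rng = np.random.default_rng(2)
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
h = 0.05

def hess(x):
    """Log-barrier Hessian H(x) = sum_i a_i a_i^T / s_i(x)^2, s_i = b_i - a_i.x."""
    s = b - A @ x
    return (A / s[:, None] ** 2).T @ A

def log_q(x_to, x_from):
    """Log density (up to a constant) of the proposal N(x_from, h * H(x_from)^{-1})."""
    H = hess(x_from)
    d = x_to - x_from
    return -0.5 * d @ H @ d / h + 0.5 * np.linalg.slogdet(H)[1]

x = np.array([0.25, 0.25])
for _ in range(5000):
    x_new = x + rng.multivariate_normal(np.zeros(2), h * np.linalg.inv(hess(x)))
    if np.all(b - A @ x_new > 0.0):           # infeasible proposals are rejected
        # For a uniform target the MH ratio reduces to the proposal-density ratio.
        if np.log(rng.uniform()) < log_q(x, x_new) - log_q(x_new, x):
            x = x_new
```

The local ellipsoid $h\,H(x)^{-1}$ shrinks automatically near faces and corners, so step sizes adapt to the boundary geometry without any tuning per constraint.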
For uniform or constrained distributions, as for the uniform measure on a convex body, the mirror or Newton–Langevin approach yields dimension-free convergence rates by leveraging self-concordant barrier properties (Chewi et al., 2020). These rates often outperform projected or Moreau-Yosida ULA methods, avoiding their unfavorable dependence on ambient dimension or step-size (Ahn et al., 2020).
6. Variants and Extensions
- Mirror Mean-Field Langevin Dynamics (MMFLD): Extends MLD to nonlinear mean-field potential functionals , applicable to measure-valued dynamics and interacting particle systems, with particle discretizations inheriting uniform propagation of chaos and exponential entropy decay (Gu et al., 5 May 2025).
- Bregman Envelope and Proximal Steps: For nonsmooth composite targets, mirror–Langevin methods equipped with Bregman–Moreau envelopes and Bregman proximity operators enable efficient handling of indicator or penalty functions, generalizing MYULA and projected algorithms (Lau et al., 2022).
- Metropolized Dynamics: Incorporation of a Metropolis–Hastings accept–reject step yields unbiased stationary distributions and improved mixing-time scaling in error tolerance (cf. MAMLA) (Srinivasan et al., 2023).
- Interior-Point/Barrier-Langevin Unification: Dikin–Langevin diffusion forms the prototype of mirror Langevin using interior-point geometry, suitable for polyhedral or more general convex constraints (Chok et al., 6 Oct 2025).
7. Practical Considerations and Numerical Performance
- Implementation: Forming and inverting the Hessian $\nabla^2\phi(x)$ at each step costs $O(d^3)$ in general, but structure (e.g., diagonal Hessians for simplex or box domains) can be exploited.
- Feasibility: State-dependent drift and noise that degenerate appropriately at the boundary guarantee trajectories cannot leave $\mathcal{X}$ when using mirror maps with barrier blow-up.
- Discretization: Euler–Maruyama schemes in dual coordinates are widely used, with a global bias of $O(\sqrt{h})$ in the mean-square sense for the unadjusted scheme, vanishing as $h \to 0$ (Li et al., 2021, Ahn et al., 2020).
- Mixing Times: Metropolized algorithms achieve geometric (logarithmic) dependence on TV- or KL-error, matching unconstrained MCMC in favorable regimes (Srinivasan et al., 2023).
- Numerical Experiments: Studies show superior mixing and statistical diagnostics for Dikin–Langevin on anisotropic or multi-modal domains compared to Euclidean or reflection-based samplers, with particularly sharp diagnostics in boundary- and corner-dominated regimes (Chok et al., 6 Oct 2025).
References
- Mirror Mean-Field Langevin Dynamics (Gu et al., 5 May 2025)
- Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm (Srinivasan et al., 2023)
- Exponential ergodicity of mirror-Langevin diffusions (Chewi et al., 2020)
- Bregman Proximal Langevin Monte Carlo via Bregman–Moreau Envelopes (Lau et al., 2022)
- The Mirror Langevin Algorithm Converges with Vanishing Bias (Li et al., 2021)
- Mirror Diffusion Models (Tae, 2023)
- Wasserstein Control of Mirror Langevin Monte Carlo (Zhang et al., 2020)
- Efficient constrained sampling via the mirror-Langevin algorithm (Ahn et al., 2020)
- Constrained Dikin-Langevin diffusion for polyhedra (Chok et al., 6 Oct 2025)