Exponential Ergodicity of RLMC

Updated 19 November 2025
  • Exponential ergodicity of RLMC is defined as the uniform, exponential convergence of its chain to the invariant distribution under strong convexity and smoothness conditions.
  • The methodology leverages a randomized midpoint approach that reduces drift bias to O(h²), significantly enhancing convergence compared to traditional Langevin algorithms.
  • Key theoretical results include explicit contraction rates derived via geometric drift and minorization, offering actionable guidelines for step-size selection and algorithm robustness.

Exponential ergodicity of RLMC refers to the property that the randomized midpoint Langevin Monte Carlo (RLMC) chain, under suitable conditions, converges to its invariant distribution at an exponential rate, uniformly over initial states. This article provides a comprehensive account of definitions, theoretical guarantees, methodologies, explicit convergence rates, and their implications within the wider Markov process literature.

1. Definitions and Theoretical Framework

Exponential ergodicity denotes the existence of constants $C<\infty$ and $\rho\in(0,1)$ such that, for a Markov chain $(X_k)_{k\ge0}$ with invariant distribution $\pi$, the distance between the $n$-step law $P^n(x,\cdot)$ from any initial state $x$ and $\pi$ in a suitable norm decays as $C\rho^n$. For RLMC, the Markov chain is constructed as follows:

Given $U:\mathbb{R}^d\to\mathbb{R}$ twice continuously differentiable, with constants $0<m\le L<\infty$ such that $m I_d \preceq \nabla^2 U(x) \preceq L I_d$ for all $x\in\mathbb{R}^d$, the chain iterates by first drawing $\alpha_k\sim\mathrm{Unif}(0,1)$ and independent Gaussians $\xi_k,\xi'_k\sim\mathcal{N}(0,I_d)$, computing the "midpoint"

$$Y_k = X_k - \alpha_k h\,\nabla U(X_k) + \sqrt{2\alpha_k h}\,\xi_k,$$

and then setting

$$X_{k+1} = X_k - h\,\nabla U(Y_k) + \sqrt{2\alpha_k h}\,\xi_k + \sqrt{2(1-\alpha_k)h}\,\xi'_k.$$

The corresponding kernel $R_h(x,\cdot)$ defines a time-homogeneous Markov chain whose generator approximates the overdamped Langevin diffusion's generator up to $O(h^2)$ error (Li et al., 17 Nov 2025).
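The update above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation of the standard randomized-midpoint scheme, with the Gaussian increments coupled as in the display equations; function and variable names are the author's own, not from the cited paper:

```python
import numpy as np

def rlmc_step(x, grad_U, h, rng):
    """One randomized-midpoint Langevin (RLMC) step for potential U."""
    alpha = rng.uniform()                 # alpha_k ~ Unif(0,1)
    xi = rng.standard_normal(x.shape)     # Brownian increment over [0, alpha*h]
    xi2 = rng.standard_normal(x.shape)    # increment over [alpha*h, h]
    # Midpoint: Euler step of length alpha*h, driven by the first increment.
    y = x - alpha * h * grad_U(x) + np.sqrt(2 * alpha * h) * xi
    # Full step: gradient at the midpoint, full Brownian increment
    # (coupled with the midpoint noise, as in the continuous-time coupling).
    return (x - h * grad_U(y)
            + np.sqrt(2 * alpha * h) * xi
            + np.sqrt(2 * (1 - alpha) * h) * xi2)
```

For the standard Gaussian target $U(x)=|x|^2/2$ (so $\nabla U(x)=x$), iterating this step drives an ensemble of chains to mean $0$ and variance close to $1$.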

2. Main Exponential Ergodicity Results

The main theorem established for RLMC asserts that, under the strong convexity and smoothness conditions on $U$ and for step-size $h\in(0,h_0]$ with $h_0$ explicit in $m$ and $L$, the RLMC chain is exponentially ergodic. There exist explicit constants $C<\infty$, $\rho\in(0,1)$, and a Lyapunov function $V\ge1$ such that

$$\|R_h^n(x,\cdot) - \pi_h\|_V \le C\,V(x)\,\rho^n, \qquad n\ge0,$$

or equivalently, for every measurable $f$ with $|f|\le V$,

$$\bigl|R_h^n f(x) - \pi_h(f)\bigr| \le C\,V(x)\,\rho^n,$$

where $\pi_h$ is the invariant law of the chain and $\|\mu-\nu\|_V$ denotes the weighted total variation ($V$-norm) distance,

$$\|\mu-\nu\|_V = \sup_{|f|\le V}\bigl|\mu(f)-\nu(f)\bigr|.$$

The contraction rate $\rho$ and the prefactor $C$ are explicit, given in terms of the drift and minorization constants (Li et al., 17 Nov 2025).

3. Proof Methodology: Drift and Minorization

The exponential ergodicity proof has three main components:

  • Geometric Drift Condition: For a quadratic Lyapunov function, e.g. $V(x)=1+|x-x^\star|^2$ with $x^\star$ the minimizer of $U$, the RLMC kernel satisfies $R_h V \le \lambda V + b$ with $\lambda\in(0,1)$ (for small $h$) and explicit $b<\infty$, ensuring the process contracts towards a compact set in expectation.
  • Minorization (Small-Set Condition): Every ball $B(x^\star,R)$ is a small set, i.e., $R_h(x,\cdot)\ge\epsilon\,\nu(\cdot)$ for all $x\in B(x^\star,R)$, some $\epsilon>0$, and a probability measure $\nu$, giving a uniform lower bound on the transition density over sets of positive measure.
  • Meyn–Tweedie Theorem: Combining drift and minorization yields the existence and uniqueness of an invariant law, with exponential convergence in weighted total variation (Li et al., 17 Nov 2025).
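For the Gaussian toy model $U(x)=|x|^2/2$, the geometric drift condition can be verified in closed form, since one RLMC step is linear in $x$. The following sketch (author's own calculation, assuming the standard midpoint coupling and the quadratic Lyapunov function $V(x)=1+|x|^2$) produces admissible constants $\lambda$ and $b$:

```python
def drift_constants(h, d):
    """Drift condition R_h V <= lam * V + b for V(x) = 1 + |x|^2,
    with RLMC applied to U(x) = |x|^2 / 2 (averaged over alpha)."""
    # One step maps x to (1 - h + alpha*h^2) x plus Gaussian noise, so the
    # contraction of |x|^2 is lam = E[(1 - h + alpha*h^2)^2], using
    # E[alpha] = 1/2 and E[alpha^2] = 1/3 for alpha ~ Unif(0,1).
    lam = (1 - h) ** 2 + (1 - h) * h ** 2 + h ** 4 / 3
    # Injected noise variance per coordinate:
    # E[2*alpha*h*(1-h)^2 + 2*(1-alpha)*h] = h * ((1-h)^2 + 1).
    noise = h * ((1 - h) ** 2 + 1)
    # R_h V(x) = 1 + lam*|x|^2 + d*noise = lam*V(x) + (1 - lam) + d*noise.
    b = (1 - lam) + d * noise
    return lam, b
```

For any $h\in(0,1]$ this gives $\lambda<1$, so the drift inequality holds with equality at every point in this linear model.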

The midpoint in RLMC reduces the discretization bias in the drift to $O(h^2)$, yielding faster mixing compared to the unadjusted Langevin algorithm (ULA), where the drift bias is only $O(h)$ (Li et al., 17 Nov 2025).
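This comparison can be made concrete for the one-dimensional Gaussian target $U(x)=x^2/2$, where the stationary variances of both chains are available in closed form (a toy calculation by the author, not from the cited paper; for this linear problem RLMC's variance bias turns out to be even higher order than $O(h^2)$):

```python
def ula_var(h):
    """Stationary variance of ULA for U(x) = x^2/2.

    The update x' = (1 - h) x + sqrt(2h) xi gives
    v = 2h / (1 - (1 - h)^2) = 1 / (1 - h/2), i.e. O(h) bias."""
    return 2 * h / (1 - (1 - h) ** 2)

def rlmc_var(h):
    """Stationary variance of RLMC for U(x) = x^2/2, averaged over
    alpha ~ Unif(0,1).  The update is
      x' = (1 - h + alpha h^2) x + (1 - h) sqrt(2 alpha h) xi
           + sqrt(2 (1 - alpha) h) xi'."""
    a2 = (1 - h) ** 2 + (1 - h) * h ** 2 + h ** 4 / 3   # E[(1-h+alpha h^2)^2]
    noise = h * ((1 - h) ** 2 + 1)                      # E[noise variance]
    return noise / (1 - a2)
```

Even at moderate step sizes such as $h=0.2$, the ULA variance is off by roughly $h/2$ while the RLMC variance is within a fraction of a percent of the true value $1$.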

4. Explicit Rates and Step Size Constraints

The key contraction parameter $\rho\in(0,1)$ determines the spectral gap $1-\rho$ of the Markov kernel, valid for step sizes $h\in(0,h_0]$. The prefactor $C$ is determined by the Lyapunov function and the small-set constants.

For the time-randomized skeleton (continuous-time RLMC), analogous spectral gap arguments give exponential ergodicity with the continuous-time rate matching the underlying diffusion, i.e., contraction at rate $m$ under $m$-strong convexity of $U$ (Mao et al., 2021).
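The continuous-time contraction at rate $m$ can be illustrated numerically via a synchronous coupling of two Euler-discretized Langevin diffusions for the quadratic potential $U(x)=m|x|^2/2$ (an illustrative sketch, not the construction of the cited papers): the shared noise cancels, so the distance between the two trajectories decays deterministically like $e^{-mt}$ up to discretization error.

```python
import numpy as np

def coupled_distance(x0, y0, m, t, n_steps, rng):
    """Distance after time t between two synchronously coupled
    overdamped Langevin chains for U(x) = m |x|^2 / 2."""
    h = t / n_steps
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)   # SAME noise for both chains
        x = x - h * m * x + np.sqrt(2 * h) * xi
        y = y - h * m * y + np.sqrt(2 * h) * xi
    return float(np.linalg.norm(x - y))
```

With $m=1$ and $t=1$, an initial separation of $2$ shrinks to approximately $2e^{-1}$.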

5. Relationship to General Ergodicity and Markov Process Theory

RLMC inherits its ergodicity properties from general Markov semigroup theory:

  • For reversible chains, the exponential $L^2(\pi)$ ergodicity rate equals the spectral gap of the generator (Mao et al., 2021, Guo et al., 2020).
  • Uniform ergodicity in total variation is guaranteed provided a geometric drift-minorization pair holds (Li et al., 17 Nov 2025), and the actual total variation convergence rate can be equated to the exponential $L^2$ rate under tight hitting-time control for small sets (Mao et al., 2021).
  • In the setting of functional ergodicity (e.g., $f$-norms), the spectral gap criterion is necessary and sufficient for exponential ergodicity when the process is reversible; the same holds as a sufficient condition in the non-reversible case (Guo et al., 2020).

Summary of subordinate results:

  • Geometric ergodicity: $m$-strong convexity, $L$-smoothness, $h$ small; rate $C\,V(x)\,\rho^n$.
  • Uniform total variation: small-set (minorization) condition plus geometric drift; rate $C\rho^n$.
  • $L^2$ spectral gap: reversibility plus a Poincaré inequality; rate $e^{-\lambda_{\mathrm{gap}}t}$.
  • Weighted $V$-ergodicity: drift condition $R_h V\le\lambda V+b$ with positive spectral gap; rate $C\,V(x)\,\rho^n$.

The RLMC thus achieves the optimal exponential rate allowed by the diffusion process being discretized.

6. RLMC in Broader Algorithmic Contexts and Infinite Dimensions

RLMC-type models are linked to infinite-dimensional OU processes with cylindrical Lévy noise, where exponential ergodicity has also been established under spectral gap and lower-bound conditions on the noise (Wang, 2015). The RLMC methodology integrates into the general framework by verifying the geometric drift and small-set criteria, which translates to exponential mixing in total variation with explicit rates.

If the generator admits a uniform spectral gap and the noise is sufficiently rich (in the sense of the lower Bernstein function bound), ergodicity extends to infinite-dimensional or randomized linear Markov chain settings (Wang, 2015).

7. Metrics, Extensions, and Applications

Convergence analysis for RLMC is typically presented in weighted total variation, Wasserstein distance, relative entropy, or $L^2$ norms. The explicit rates established for the RLMC algorithm give not only theoretical mixing guarantees but inform practical step size selection and algorithmic robustness, especially for strongly convex log-concave targets.
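The exponential bound translates directly into an iteration-count guideline: given $\rho$, $C$, and $V(x_0)$, solving $C\,V(x_0)\,\rho^n \le \epsilon$ for $n$ gives the number of steps needed for a prescribed accuracy. A small helper (function and parameter names are illustrative):

```python
import math

def iterations_to_accuracy(rho, C, Vx0, eps):
    """Smallest n with C * V(x0) * rho**n <= eps, from the
    exponential ergodicity bound."""
    return math.ceil(math.log(C * Vx0 / eps) / math.log(1.0 / rho))
```

For example, with $\rho=0.99$, $C=10$, $V(x_0)=5$, and $\epsilon=10^{-3}$, roughly a thousand iterations suffice; since $\rho\approx 1-c\,h$ for small $h$, halving the step size roughly doubles this count, which is the trade-off against discretization bias.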

The RLMC framework also provides a foundation for understanding ergodicity in perturbed and reflected variants of Langevin dynamics as encountered in stochastic sampling, statistical physics, and high-dimensional Bayesian inference.


References:

  • (Li et al., 17 Nov 2025) "Convergence rate of randomized midpoint Langevin Monte Carlo"
  • (Mao et al., 2021) "Convergence Rates in Uniform Ergodicity by Hitting Times and $L^2$-exponential Convergence Rates"
  • (Guo et al., 2020) "Estimate the exponential convergence rate of f-ergodicity via spectral gap"
  • (Wang, 2015) "Linear Evolution Equations with Cylindrical Lévy Noise: Gradient Estimates and Exponential Ergodicity"
