Weak Convergence Rate Bounds
- Weak Convergence Rate Bounds are quantitative estimates that measure how fast the expectation of test functionals converges in discretized stochastic and numerical systems.
- They are crucial in applications such as SDEs, SPDEs, inverse problems, and optimization, offering rigorous error control when strong convergence is unattainable.
- These bounds leverage analytical tools like Kolmogorov equations, weak Poincaré inequalities, and spectral regularization to yield precise rate estimates in diverse settings.
Weak convergence rate bounds provide explicit quantitative estimates on how fast sequences of measures, stochastic processes, operator semigroups, or iterates of optimization algorithms converge in law or against a prescribed class of test functions, typically as a function of algorithmic parameters, discretization granularity, or small parameters in multiscale dynamics. These bounds control the decay of expectation errors for functionals of the solution—not just pointwise trajectories—making them central in numerical analysis, stochastic process theory, and optimization, particularly when strong (pathwise) convergence is either unobtainable or too stringent. Weak rates are foundational for rigorous error analysis in PDEs, SDEs, SPDEs, inverse problems, adaptive MCMC, and statistical physics.
1. Fundamental Notions and Settings
Weak convergence rate bounds quantify the rate at which the expectation of a test functional under an approximate solution $X^h$ converges to its value under the exact solution $X$, i.e.,
$$\left|\mathbb{E}[\varphi(X^h)] - \mathbb{E}[\varphi(X)]\right| \le C\,\varepsilon(h)$$
for all $\varphi$ in a class $\Phi$, with $\varepsilon(h) \to 0$ as $h \to 0$. The choice of $\Phi$ (bounded, Lipschitz, $C_b^k$) and the metric (total variation, Wasserstein, bounded Lipschitz, or Hilbert-space dual pairing) defines the "weakness" of the convergence.
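To make the definition concrete, here is a minimal sketch (the model and parameter values are illustrative, not taken from the cited works): for geometric Brownian motion $dX = \mu X\,dt + \sigma X\,dW$ with the linear test functional $\varphi(x) = x$, the Euler–Maruyama expectation is available in closed form, so the first-order weak rate can be checked without Monte Carlo sampling.

```python
import math

def euler_weak_error(x0, mu, T, n_steps):
    """Weak error |E[phi(X_hat_N)] - E[phi(X_T)]| for phi(x) = x under
    Euler-Maruyama applied to dX = mu*X dt + sigma*X dW.  The noise term
    has zero mean, so the scheme's expectation is exactly x0*(1 + mu*h)^N
    and no sampling is needed for this linear test functional."""
    h = T / n_steps
    approx_mean = x0 * (1.0 + mu * h) ** n_steps
    exact_mean = x0 * math.exp(mu * T)
    return abs(approx_mean - exact_mean)

# Halving the step size roughly halves the weak error: first-order weak rate.
errors = [euler_weak_error(x0=1.0, mu=0.5, T=1.0, n_steps=n) for n in (50, 100, 200)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

The successive error ratios cluster near 2, consistent with $\varepsilon(h) = O(h)$ for this scheme and test functional.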
Key settings in which weak convergence rates have been rigorously characterized include:
- Time discretizations of SDEs and SPDEs, such as the Euler–Maruyama or Milstein schemes and Galerkin-based discretizations (Cai et al., 2019, Altmayer et al., 2016, Bayer et al., 2022)
- Numerical solutions to PDE-constrained inverse problems, especially under spectral regularization (Guastavino et al., 4 Dec 2025)
- Operator semigroup convergence for degenerate and/or non-reversible diffusions, using functional inequalities (Bertram et al., 2021, Grothaus et al., 2017)
- Central limit theorems for multiscale interacting particle systems or McKean–Vlasov SDEs (Xiang et al., 20 May 2025)
- Adaptive and non-adaptive MCMC convergence in weak topology (Brown et al., 2024)
- Optimization algorithms, where weak convergence often corresponds to convergence of objective residuals or dual variables (Li et al., 2023, Grimmer, 2021)
2. Weak Convergence Rate Bounds for Stochastic Differential Equations
Weak error bounds for SDE discretizations arise from expansions of the local error in expectation via the Kolmogorov backward equation, Malliavin calculus, or duality-based techniques. For example:
- In the log-Heston model, under smooth payoffs and a suitable Feller-index condition, the drift-implicit Milstein–Euler scheme achieves weak rate one, i.e. a weak error of order $O(h)$, where $h$ is the maximal step size (Altmayer et al., 2016). The analysis uses Kolmogorov PDE techniques and a Malliavin calculus integration by parts to achieve leading-term cancellation.
- For rough volatility models, the weak discretization error for smooth payoffs admits a lower bound whose exponent is governed by the Hurst index $H$ of the volatility process (Bayer et al., 2022). If the volatility functional is linear, sharper bounds are attainable.
For SPDEs such as the stochastic Allen–Cahn equation with additive noise, an explicit full discretization (spectral Galerkin in space, tamed exponential Euler in time) yields weak convergence rates in both time and space that are essentially double the strong rates, with exponents determined by the noise regularity index and the decay of the Laplacian eigenvalues (Cai et al., 2019).
3. Weak Convergence via Functional Inequalities and Operator Semigroups
Degenerate or non-uniformly elliptic diffusions often lack classical Poincaré inequalities, precluding exponential convergence in strong metrics. Instead, weak Poincaré inequalities yield explicit subgeometric rate bounds:
- For a $C_0$-semigroup $(T_t)_{t \ge 0}$ generated by $L = S + A$ (symmetric part $S$, anti-symmetric part $A$), if a weak Poincaré inequality holds,
$$\|f - \mu(f)\|_{L^2(\mu)}^2 \le \alpha(r)\,\mathcal{E}(f,f) + r\,\Phi(f) \quad \text{for all } r > 0$$
(with Dirichlet form $\mathcal{E}(f,f) = -\langle Lf, f \rangle_{L^2(\mu)}$ and a suitable functional $\Phi$, e.g. $\Phi(f) = \|f\|_\infty^2$), then
$$\|T_t f - \mu(f)\|_{L^2(\mu)}^2 \le \xi(t)\,\Phi(f),$$
where $\xi(t)$ is given by an explicit minimization involving $\alpha$ and its rate of divergence as $r \to 0$ (Grothaus et al., 2017, Bertram et al., 2021).
- For polynomially confining potentials, the decay can be stretched-exponential, $\xi(t) \le C \exp(-c\,t^\gamma)$ for some $\gamma \in (0,1)$, while with weakly confining (logarithmic) potentials only polynomial or iterated-logarithmic rates are obtained. The regime is dictated by the tails of the invariant measure (Bertram et al., 2021).
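The "explicit minimization" defining $\xi(t)$ can be sketched numerically. The following assumes a Röckner–Wang-type formulation $\xi(t) = \inf\{r \in (0,1) : \tfrac{1}{2}\alpha(r)\log(1/r) \le t\}$ together with the illustrative blow-up profile $\alpha(r) = r^{-q}$; both choices are assumptions made for illustration, not the exact objects of the cited papers.

```python
import math

def xi(t, q):
    """Compute xi(t) = inf{ r in (0,1) : 0.5 * alpha(r) * log(1/r) <= t }
    for the illustrative profile alpha(r) = r^{-q}.  The left-hand side is
    strictly decreasing in r on (0,1), so bisection finds the threshold."""
    g = lambda r: 0.5 * r ** -q * math.log(1.0 / r)
    lo, hi = 1e-12, 1.0 - 1e-12   # g(lo) is huge, g(hi) is tiny
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) <= t:
            hi = mid              # threshold lies at or below mid
        else:
            lo = mid
    return hi

# Subgeometric decay: xi(t) shrinks polynomially (up to logs) as t grows.
rates = [xi(t, q=2.0) for t in (10.0, 100.0, 1000.0)]
```

The faster $\alpha(r)$ blows up as $r \to 0$ (larger $q$), the slower $\xi(t)$ decays, matching the subgeometric regimes described above.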
4. Weak Convergence in Inverse Problems and Spectral Regularization
In inverse problems, weak convergence rates quantify the decay of the bias and variance of linear observation functionals under spectral regularization. Key insights include:
- For compact forward operators observed through finitely many noisy point samples, spectral regularization (e.g., Tikhonov, spectral cutoff) admits weak error bounds for linear test functionals whose rates are expressed in terms of the fill distance of the sample points, the smoothness of the reproducing kernel, and the noise level. In trace-class settings the variance scaling improves, yielding faster optimal weak rates (Guastavino et al., 4 Dec 2025).
Sampling inequalities replace classical source conditions; weak rates can thus be achieved independently of smoothness assumptions on the unknown, controlled solely by fill-distance and geometric considerations.
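A minimal numerical sketch of spectral Tikhonov regularization (the diagonal operator, truth, test functional, and noise level are all invented for illustration): the error in a smooth linear functional of the regularized solution can be far smaller than the full-norm reconstruction error, which is the phenomenon weak rates capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-posed diagonal operator with singular values sigma_i = i^{-2}.
n = 200
i = np.arange(1, n + 1)
sigma = i ** -2.0
x_true = i ** -1.5              # assumed truth with fast-decaying coefficients
delta = 1e-4                    # assumed noise level
y = sigma * x_true + delta * rng.standard_normal(n)

def tikhonov(y, sigma, alpha):
    """Spectral Tikhonov filter in the singular basis:
    x_alpha_i = sigma_i / (sigma_i^2 + alpha) * y_i."""
    return sigma / (sigma ** 2 + alpha) * y

x_alpha = tikhonov(y, sigma, alpha=delta)   # simple a priori choice alpha ~ delta

# Weak (functional) error for a smooth linear test functional vs. full-norm error.
phi = i ** -2.0
weak_err = abs(phi @ (x_alpha - x_true))
norm_err = np.linalg.norm(x_alpha - x_true)
```

Because the reconstruction error concentrates in high-frequency components where the smooth functional $\varphi$ has little mass, `weak_err` comes out well below `norm_err`.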
5. Weak Convergence in Stochastic Systems and Limit Theorems
Weak convergence rates also characterize fluctuation limits in multiscale stochastic systems:
- In two-scale McKean–Vlasov models, after rescaling the deviation between the slow component and its averaged limit, it is shown that for smooth test functions the weak error of the central limit approximation decays at an explicit algebraic rate in the scale-separation parameter, the limit being the unique solution of a McKean–Vlasov SDE (Xiang et al., 20 May 2025). The proof employs Poisson equation techniques and duality arguments via the Kolmogorov (Cauchy) backward equation.
In directed polymer models, the weak-disorder regime (inverse temperature below its critical value) leads to polynomial convergence rates for the partition-function martingale, with fluctuation scaling governed by a CLT with stable and mixing limit laws, in sharp contrast to the exponential rates seen on trees (Comets et al., 2016).
6. Weak Convergence Rates in Optimization and Sampling
In optimization and MCMC theory, "weak" convergence rates often pertain to Bregman divergences, Lyapunov function decay, or distances in Wasserstein/total variation metrics:
- For mirror descent with a strongly convex and smooth objective $f$ and a mirror map $\psi$ that is likewise smooth and strongly convex, the Bregman divergence to the minimizer decays exponentially in continuous time, $D_\psi(x^\star, x(t)) \le e^{-\rho t}\,D_\psi(x^\star, x(0))$ for an explicit rate $\rho > 0$, and linearly (geometrically) in discrete time for optimal step-size choices (Li et al., 2023).
- More generally, Grimmer's meta-theorems show that specialized rates proved under (Hölder) growth bounds systematically yield general weak convergence rates for first-order methods, and enable new lower bounds on attainable rates under additional growth conditions. This unifies and extends the landscape of possible rates in convex optimization (Grimmer, 2021).
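A minimal sketch of mirror descent with the negative-entropy mirror map (exponentiated gradient) on the probability simplex; the KL objective and step size here are illustrative choices, not the IQC setting of Li et al.:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta, n_iter):
    """Entropic mirror descent (exponentiated gradient) on the probability
    simplex: x_{k+1} is proportional to x_k * exp(-eta * grad f(x_k))."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# Objective f(x) = KL(x || q), minimized at x* = q, with grad f = log(x/q) + 1.
q = np.array([0.5, 0.3, 0.2])
grad = lambda x: np.log(x / q) + 1.0

x0 = np.full(3, 1.0 / 3.0)
x = mirror_descent_simplex(grad, x0, eta=0.5, n_iter=60)
bregman_gap = float(np.sum(q * np.log(q / x)))   # D_psi(x*, x_k) = KL(q || x_k)
```

For this objective the update contracts the log-deviation from $q$ by a fixed factor per step, so the Bregman divergence to the minimizer decays linearly (geometrically) in the iteration count, as in the discrete-time statement above.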
For adaptive MCMC, subgeometric weak convergence lower and upper bounds are matched up to logarithmic factors when the adaptation schedule decays sufficiently fast: the rate is determined by the drift function and the tails of the invariant measure, and upper bounds of the same order are obtained under minorization and drift conditions (Brown et al., 2024).
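A sketch of diminishing adaptation (the Robbins–Monro schedule, target acceptance rate, and Student-$t$ target are all illustrative choices): a random-walk Metropolis sampler whose proposal scale is tuned with step sizes $k^{-\gamma}$, so the adaptation vanishes as the chain runs.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x, nu=5.0):
    """Log-density (up to an additive constant) of a Student-t(nu) target."""
    return -0.5 * (nu + 1.0) * np.log1p(x * x / nu)

def adaptive_rwm(n, gamma=0.6, target_acc=0.44):
    """Random-walk Metropolis with a diminishing Robbins-Monro adaptation of
    the log proposal scale; step sizes k^{-gamma} make the adaptation die out,
    a standard sufficient condition for ergodicity of adaptive MCMC."""
    x, log_s, out = 0.0, 0.0, np.empty(n)
    for k in range(1, n + 1):
        prop = x + np.exp(log_s) * rng.standard_normal()
        acc = min(1.0, np.exp(log_target(prop) - log_target(x)))
        if rng.random() < acc:
            x = prop
        log_s += k ** -gamma * (acc - target_acc)  # adaptation vanishes as k grows
        out[k - 1] = x
    return out

samples = adaptive_rwm(20000)
```

The sample moments should be consistent with the heavy-tailed target (a $t_5$ law has mean 0 and variance $5/3$), though convergence in such settings is subgeometric rather than geometric, as the bounds above quantify.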
In data-driven generative modeling, weak log-concavity of the target suffices for non-asymptotic 2-Wasserstein error bounds for probability flow ODE samplers, with each error term controlled in explicit dependence on the step size, the time horizon, and other model parameters (Kremling et al., 20 Oct 2025).
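A sketch of a probability flow ODE sampler in the one case where the score is known in closed form: a Gaussian target under an Ornstein–Uhlenbeck forward process (all parameters are illustrative). Because the velocity field is linear, the backward flow rescales by exactly $\sqrt{\mathrm{var}(0)/\mathrm{var}(T)}$, which a numerical integrator should reproduce.

```python
import math

def var_t(t, var0=4.0):
    """Marginal variance of the OU forward process dx = -x dt + sqrt(2) dW
    started from N(0, var0): var(t) = 1 + (var0 - 1) * exp(-2t)."""
    return 1.0 + (var0 - 1.0) * math.exp(-2.0 * t)

def probability_flow_sample(x_T, T=3.0, n_steps=3000):
    """Integrate the probability flow ODE dx/dt = f - (1/2) g^2 * score
    backward from t = T to t = 0 with the exact Gaussian score -x / var_t(t),
    using plain RK4 on the (linear) velocity field."""
    def v(x, t):
        return -x + x / var_t(t)   # f = -x, g^2 = 2, score = -x / var_t(t)
    h = T / n_steps
    x, t = x_T, T
    for _ in range(n_steps):
        k1 = v(x, t)
        k2 = v(x - 0.5 * h * k1, t - 0.5 * h)
        k3 = v(x - 0.5 * h * k2, t - 0.5 * h)
        k4 = v(x - h * k3, t - h)
        x -= h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t -= h
    return x

# The exact backward map multiplies by sqrt(var_t(0) / var_t(T)), so a draw
# from N(0, var_t(T)) is carried to a draw from the target N(0, var0).
x0 = probability_flow_sample(1.0)
exact = math.sqrt(var_t(0.0) / var_t(3.0))
```

In practice the score is a learned network rather than the exact Gaussian score, and the non-asymptotic bounds above track how score error, step size, and horizon each enter the 2-Wasserstein error.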
7. Structural Features and Tightness of Weak Rate Bounds
The main determinants of the weak rate are:
- Regularity of the coefficients, functional class of test observables, and stochastic integrability properties.
- Geometric or analytic inequalities (Poincaré, weak-Poincaré, Log-Sobolev) controlling variance decay.
- Tail properties of the invariant measure or potential.
- The structure of the discretization scheme: explicit/implicit, projection methods, and adaptivity.
In most settings, weak convergence rates are strictly better than strong rates; for example, in many SPDEs, the weak rate is double the strong rate (Cai et al., 2019). However, in the absence of regularity or under roughness or degeneracy (e.g., in rough volatility models, degenerate diffusions, heavy-tailed targets), only sublinear or subgeometric (polynomial, stretched-exponential, logarithmic) weak rates are available.
Lower bound results demonstrate that, in many settings, the proven weak rates are sharp—reflecting both algorithmic and analytic limitations (e.g., Talay–Tubaro expansion for SDEs, information-theoretic lower bounds for optimization, tail-driven rates for MCMC) (Altmayer et al., 2016, Brown et al., 2024, Grimmer, 2021).
Table: Representative Weak Convergence Rate Bounds
| Domain | Setting / Model | Weak Rate Bound Example | Reference |
|---|---|---|---|
| Discretized SDE | Log-Heston, smooth payoffs | weak order one in the step size | (Altmayer et al., 2016) |
| Rough volatility | Hurst index $H$ | exponent governed by $H$; sharper for linear payoffs | (Bayer et al., 2022) |
| SPDE (Allen–Cahn) | Additive noise | weak rate essentially double the strong rate | (Cai et al., 2019) |
| Degenerate diffusion | Weak Poincaré, power tails | subgeometric $\xi(t)$: stretched-exponential to polynomial | (Grothaus et al., 2017) |
| Inverse problems | Tikhonov regularization, RKHS | rates in fill distance and noise level | (Guastavino et al., 4 Dec 2025) |
| Adaptive MCMC | Student-$t$ target | subgeometric, matched up to log factors | (Brown et al., 2024) |
| Probability flow ODE | Weakly log-concave target | non-asymptotic 2-Wasserstein bound | (Kremling et al., 20 Oct 2025) |
| Multiscale SDE CLT | McKean–Vlasov systems | algebraic rate for suitable test observables | (Xiang et al., 20 May 2025) |
These results indicate that under general circumstances, the precise weak convergence rate is dictated by the interplay between noise regularity or system degeneracy, the choice of test functions, and underlying analytic/geometric properties of the process or scheme.
References
- (Altmayer et al., 2016) Discretizing the Heston Model: An Analysis of the Weak Convergence Rate
- (Comets et al., 2016) Rate of convergence for polymers in a weak disorder
- (Grothaus et al., 2017) Weak Poincaré Inequalities for Convergence Rate of Degenerate Diffusion Processes
- (Cai et al., 2019) Weak convergence rates for an explicit full-discretization of stochastic Allen-Cahn equation with additive noise
- (Grimmer, 2021) General Holder Smooth Convergence Rates Follow From Specialized Rates Assuming Growth Bounds
- (Bertram et al., 2021) Convergence Rate for Degenerate Partial and Stochastic Differential Equations via weak Poincaré Inequalities
- (Bayer et al., 2022) On the weak convergence rate in the discretization of rough volatility models
- (Li et al., 2023) Convergence Rate Bounds for the Mirror Descent Method: IQCs, Popov Criterion and Bregman Divergence
- (Brown et al., 2024) Upper and lower bounds on the subgeometric convergence of adaptive Markov chain Monte Carlo
- (Xiang et al., 20 May 2025) Weak convergence rates for central limit theorems of multiscale McKean-Vlasov stochastic systems
- (Kremling et al., 20 Oct 2025) Non-asymptotic error bounds for probability flow ODEs under weak log-concavity
- (Guastavino et al., 4 Dec 2025) Weak convergence rates for spectral regularization via sampling inequalities