
The fast rate of convergence of the smooth adapted Wasserstein distance

Published 13 Mar 2025 in math.PR | (2503.10827v1)

Abstract: Estimating a $d$-dimensional distribution $\mu$ by the empirical measure $\hat{\mu}_n$ of its samples is an important task in probability theory, statistics and machine learning. It is well known that $\mathbb{E}[\mathcal{W}_p(\hat{\mu}_n, \mu)]\lesssim n^{-1/d}$ for $d>2p$, where $\mathcal{W}_p$ denotes the $p$-Wasserstein metric. An effective tool to combat this curse of dimensionality is the smooth Wasserstein distance $\mathcal{W}^{(\sigma)}_p$, which measures the distance between two probability measures after convolving each with isotropic Gaussian noise $\mathcal{N}(0,\sigma^2\text{I})$. In this paper we apply this smoothing technique to the adapted Wasserstein distance. We show that the smooth adapted Wasserstein distance $\mathcal{A}\mathcal{W}_p^{(\sigma)}$ achieves the fast rate of convergence $\mathbb{E}[\mathcal{A}\mathcal{W}_p^{(\sigma)}(\hat{\mu}_n, \mu)]\lesssim n^{-1/2}$ if $\mu$ is subgaussian. This result follows from the surprising fact that any subgaussian measure $\mu$ convolved with a Gaussian distribution has locally Lipschitz kernels.
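The smoothing step the abstract describes can be sketched numerically: convolving an empirical measure $\hat{\mu}_n$ with $\mathcal{N}(0,\sigma^2\text{I})$ amounts to adding independent Gaussian noise to each sample before comparing measures. The sketch below is illustrative only, in one dimension with $p=1$ (where the sorted-sample formula for $\mathcal{W}_1$ applies and the $n^{-1/2}$ rate already holds without smoothing); the paper's result concerns $d$-dimensional measures and the adapted distance $\mathcal{A}\mathcal{W}_p^{(\sigma)}$, which this toy computation does not implement. All function names here are hypothetical helpers, not the paper's code.

```python
import random

def empirical_w1(xs, ys):
    # 1-D Wasserstein-1 distance between two equal-size empirical measures:
    # the optimal coupling pairs sorted samples, so W1 is the mean absolute
    # difference of the order statistics.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def smooth(samples, sigma, rng):
    # Convolving the empirical measure with N(0, sigma^2) is realized by
    # adding independent Gaussian noise to each sample point.
    return [x + rng.gauss(0.0, sigma) for x in samples]

rng = random.Random(0)
n = 2000
mu_samples = [rng.gauss(0.0, 1.0) for _ in range(n)]  # stand-in for mu = N(0, 1)
nu_samples = [rng.gauss(0.0, 1.0) for _ in range(n)]  # fresh samples from the same mu

# Distance between the raw empirical measures, and between their
# Gaussian-smoothed counterparts (sigma = 0.5 is an arbitrary choice).
plain = empirical_w1(mu_samples, nu_samples)
smoothed = empirical_w1(smooth(mu_samples, 0.5, rng), smooth(nu_samples, 0.5, rng))
print(f"W1(raw)      = {plain:.4f}")
print(f"W1(smoothed) = {smoothed:.4f}")
```

Both quantities shrink at roughly the $n^{-1/2}$ rate here because $d=1$; the point of the paper is that smoothing preserves this fast rate for the adapted Wasserstein distance in arbitrary dimension, where the unsmoothed rate degrades to $n^{-1/d}$.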
