Gaussian-Smoothed Sliced Probability Divergences
Abstract: The Gaussian-smoothed sliced Wasserstein distance was recently introduced for comparing probability distributions while preserving privacy on the data. It has been shown to achieve performance similar to its non-smoothed (non-private) counterpart. However, the computational and statistical properties of this metric have not yet been well established. This work investigates the theoretical properties of this distance, as well as those of generalized versions that we call Gaussian-smoothed sliced divergences. We first show that smoothing and slicing preserve the metric property and the weak topology. To study the sample complexity of such divergences, we then introduce the double empirical distribution $\hat{\hat\mu}_n$ for the smoothed-projected $\mu$. The distribution $\hat{\hat\mu}_n$ results from a double sampling process: one according to the original distribution $\mu$, and a second according to the convolution of the projection of $\mu$ on the unit sphere with the Gaussian smoothing. We focus in particular on the Gaussian-smoothed sliced Wasserstein distance and prove that it converges at rate $O(n^{-1/2})$. We also derive other properties, including continuity, of different divergences with respect to the smoothing parameter. We support our theoretical findings with empirical studies in the context of privacy-preserving domain adaptation.
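The double-sampling construction described above can be sketched numerically. The following is a minimal Monte Carlo sketch (not the paper's implementation): for each random direction on the unit sphere, both samples are projected to one dimension, independent Gaussian noise of standard deviation `sigma` is added (the smoothing step), and the closed-form 1D Wasserstein distance between the smoothed projections is computed from order statistics. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def gaussian_smoothed_sliced_w2(x, y, sigma=1.0, n_projections=50, rng=None):
    """Monte Carlo sketch of the Gaussian-smoothed sliced 2-Wasserstein
    distance between empirical samples x and y of shape (n, d).

    Assumes equal sample sizes, so the 1D W_2 between the smoothed
    projections reduces to a comparison of sorted samples.
    """
    rng = np.random.default_rng(rng)
    n, d = x.shape
    sw2 = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)  # uniform random direction on the unit sphere
        # Project to 1D, then convolve with N(0, sigma^2) by adding Gaussian noise
        xp = x @ theta + sigma * rng.standard_normal(n)
        yp = y @ theta + sigma * rng.standard_normal(len(y))
        # Closed-form 1D squared W_2 between sorted (smoothed) projections
        sw2 += np.mean((np.sort(xp) - np.sort(yp)) ** 2)
    return np.sqrt(sw2 / n_projections)
```

Averaging over random directions approximates the integral over the unit sphere; the added noise is what distinguishes this from the plain sliced Wasserstein distance and is the source of the privacy guarantee discussed in the paper.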