Wasserstein Differential Privacy
Abstract: Differential privacy (DP) has achieved remarkable results in privacy-preserving machine learning. However, existing DP frameworks do not satisfy all the conditions of a metric, which prevents them from deriving stronger basic privacy properties and leads to exaggerated privacy budgets. We propose Wasserstein differential privacy (WDP), an alternative DP framework for measuring the risk of privacy leakage, which satisfies symmetry and the triangle inequality. We show and prove that WDP has 13 excellent properties, which provide theoretical support for the better performance of WDP over other DP frameworks. In addition, we derive a general privacy accounting method called the Wasserstein accountant, which enables WDP to be applied in stochastic gradient descent (SGD) scenarios that involve sub-sampling. Experiments on basic mechanisms, compositions, and deep learning show that the privacy budgets obtained by the Wasserstein accountant are relatively stable and less influenced by order, and that the overestimation of privacy budgets is effectively alleviated. The code is available at https://github.com/Hifipsysta/WDP.
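The abstract's central idea, measuring privacy leakage as a Wasserstein distance between a mechanism's output distributions on adjacent datasets, can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: it assumes the order-1 Wasserstein distance as the budget, a Gaussian mechanism, and hypothetical values for the noise scale and the sensitivity `delta`. In one dimension, the W1 distance between equal-size empirical measures reduces to the mean absolute difference of sorted samples.

```python
import random

def empirical_w1(xs, ys):
    # In 1-D, the 1-Wasserstein distance between two equal-size empirical
    # measures is the mean absolute difference of their sorted samples.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

random.seed(0)
n = 100_000
sigma = 1.0   # hypothetical noise scale of the Gaussian mechanism
delta = 0.5   # hypothetical sensitivity: query values on adjacent datasets differ by 0.5

# Mechanism outputs on two adjacent datasets: the same query value
# shifted by delta, each perturbed with independent Gaussian noise.
out_d = [0.0 + random.gauss(0.0, sigma) for _ in range(n)]
out_dp = [delta + random.gauss(0.0, sigma) for _ in range(n)]

# The empirical W1 budget should be close to delta, since two Gaussians
# with equal variance are at W1 distance |mean difference|.
eps_hat = empirical_w1(out_d, out_dp)
print(eps_hat)
```

Note how the estimated budget tracks the sensitivity rather than blowing up with the divergence order, which is the kind of stability the abstract attributes to the Wasserstein accountant.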