Privacy Amplification for the Gaussian Mechanism via Bounded Support
Abstract: Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset. These guarantees can be desirable compared to vanilla DP in real-world settings, as they tightly upper-bound the privacy leakage for a $\textit{specific}$ individual in an $\textit{actual}$ dataset, rather than considering worst-case datasets. While these frameworks are beginning to gain popularity, to date there is a lack of private mechanisms that can fully leverage the advantages of data-dependent accounting. To bridge this gap, we propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting. Experiments on model training with DP-SGD show that bounded-support Gaussian mechanisms can reduce the pDP bound $\epsilon$ by as much as 30% without negative effects on model utility.
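The abstract does not spell out the construction, so the following is only a minimal illustrative sketch of two natural ways to give the Gaussian mechanism bounded support: rectifying (clamping) the noisy output to an interval, or truncating it by resampling. The function names, the interval parameters `low`/`high`, and the scalar noise calibration `sigma * sensitivity` are our own illustrative assumptions, not the paper's notation or exact mechanism.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, sigma, rng=None):
    """Standard Gaussian mechanism: add N(0, (sigma * sensitivity)^2) noise."""
    rng = rng or np.random.default_rng()
    return value + rng.normal(0.0, sigma * sensitivity, size=np.shape(value))

def rectified_gaussian_mechanism(value, sensitivity, sigma, low, high, rng=None):
    # Hypothetical bounded-support variant: clamp (rectify) the noisy output
    # to [low, high]; probability mass outside the interval collapses onto
    # the endpoints.
    noisy = gaussian_mechanism(value, sensitivity, sigma, rng)
    return np.clip(noisy, low, high)

def truncated_gaussian_mechanism(value, sensitivity, sigma, low, high, rng=None):
    # Hypothetical bounded-support variant: resample until the noisy output
    # lies in [low, high] (truncation rather than rectification). For
    # high-dimensional inputs this naive rejection loop can be slow; it is
    # only meant to convey the idea.
    rng = rng or np.random.default_rng()
    while True:
        noisy = gaussian_mechanism(value, sensitivity, sigma, rng)
        if np.all((noisy >= low) & (noisy <= high)):
            return noisy

# Illustrative usage in a DP-SGD-style step: noise a clipped per-example
# gradient sum with the rectified variant instead of the plain Gaussian.
clipped_grad_sum = np.array([0.7, -0.2, 1.0])   # assumed L2-clipped to norm 1
noisy_grad = rectified_gaussian_mechanism(clipped_grad_sum, sensitivity=1.0,
                                          sigma=1.0, low=-2.0, high=2.0)
```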