How Does Bayes Error Limit Probabilistic Robust Accuracy

Published 23 May 2024 in cs.LG | (2405.14923v1)

Abstract: Adversarial examples pose a security threat to many critical systems built on neural networks. Given that deterministic robustness often comes with significantly reduced accuracy, probabilistic robustness (i.e., the probability of having the same label within a vicinity is $\ge 1-\kappa$) has been proposed as a promising way of achieving robustness whilst maintaining accuracy. However, existing training methods for probabilistic robustness still experience non-trivial accuracy loss. It is unclear whether there is an upper bound on the accuracy when optimising towards probabilistic robustness, and whether there is a certain relationship between $\kappa$ and this bound. This work studies these problems from a Bayes error perspective. We find that while Bayes uncertainty does affect probabilistic robustness, its impact is smaller than that on deterministic robustness. This reduced Bayes uncertainty allows a higher upper bound on probabilistic robust accuracy than on deterministic robust accuracy. Further, we prove that with optimal probabilistic robustness, each probabilistically robust input is also deterministically robust in a smaller vicinity. We also show that voting within the vicinity always improves probabilistic robust accuracy, and that the upper bound of probabilistic robust accuracy monotonically increases as $\kappa$ grows. Our empirical findings are consistent with these theoretical results.
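The two notions in the abstract can be made concrete with a small Monte Carlo sketch: an input is probabilistically robust if the fraction of vicinity points sharing its label is at least $1-\kappa$, and vicinity voting predicts by majority label over the same samples. This is an illustrative sketch, not the paper's method; the classifier, the uniform $L_\infty$ vicinity, and the function names are assumptions for the example.

```python
import numpy as np

def vicinity_samples(x, epsilon, n, rng):
    """Draw n uniform samples from the L-infinity ball of radius epsilon around x.
    (Uniform sampling is an assumed vicinity distribution for illustration.)"""
    return x + rng.uniform(-epsilon, epsilon, size=(n,) + x.shape)

def is_prob_robust(classifier, x, epsilon, kappa, n=1000, seed=0):
    """Monte Carlo check of probabilistic robustness: does the fraction of
    vicinity points sharing x's label reach at least 1 - kappa?"""
    rng = np.random.default_rng(seed)
    label = classifier(x[None])[0]              # label of the point itself
    labels = classifier(vicinity_samples(x, epsilon, n, rng))
    return np.mean(labels == label) >= 1 - kappa

def vicinity_vote(classifier, x, epsilon, n=1000, seed=0):
    """Predict by majority vote over labels sampled from the vicinity of x."""
    rng = np.random.default_rng(seed)
    labels = classifier(vicinity_samples(x, epsilon, n, rng))
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]
```

A larger $\kappa$ loosens the `is_prob_robust` test, which matches the paper's observation that the upper bound on probabilistic robust accuracy grows monotonically with $\kappa$.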
