
Provable Robustness of Adversarial Training for Learning Halfspaces with Noise

Published 19 Apr 2021 in cs.LG, cs.CR, math.OC, and stat.ML | arXiv:2104.09437v1

Abstract: We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of agnostic label noise. Denoting $\mathsf{OPT}_{p,r}$ as the best robust classification error achieved by a halfspace that is robust to perturbations of $\ell_{p}$ balls of radius $r$, we show that adversarial training on the standard binary cross-entropy loss yields adversarially robust halfspaces up to (robust) classification error $\tilde O(\sqrt{\mathsf{OPT}_{2,r}})$ for $p=2$, and $\tilde O(d^{1/4} \sqrt{\mathsf{OPT}_{\infty, r}} + d^{1/2} \mathsf{OPT}_{\infty,r})$ when $p=\infty$. Our results hold for distributions satisfying anti-concentration properties enjoyed by log-concave isotropic distributions, among others. We additionally show that if one instead uses a nonconvex sigmoidal loss, adversarial training yields halfspaces with an improved robust classification error of $O(\mathsf{OPT}_{2,r})$ for $p=2$, and $O(d^{1/4}\mathsf{OPT}_{\infty, r})$ when $p=\infty$. To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
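To make the setting concrete: for a linear classifier, the worst-case $\ell_2$ perturbation of radius $r$ has a closed form, so adversarial training on the binary cross-entropy loss reduces to gradient descent on $\log(1 + \exp(-y \langle w, x \rangle + r \|w\|_2))$. The sketch below illustrates this; it is an assumption-laden toy (fixed step size, subgradient at $w=0$, no anti-concentration checks), not the paper's exact algorithm or guarantees.

```python
import numpy as np

def robust_logistic_loss(w, X, y, r):
    """Worst-case (ell_2 ball, radius r) logistic loss of the halfspace
    x -> sign(<w, x>). The inner maximization has the closed form
    log(1 + exp(-y <w, x> + r ||w||_2)) for a linear classifier."""
    margins = y * (X @ w) - r * np.linalg.norm(w)
    return np.mean(np.log1p(np.exp(-np.clip(margins, -30.0, 30.0))))

def adversarial_train(X, y, r, lr=0.5, steps=300):
    """Plain gradient descent on the robust logistic loss.

    A hedged sketch: step size, iteration count, and zero initialization
    are illustrative choices, not the paper's prescription.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w) - r * np.linalg.norm(w)
        # d(loss)/d(margin) per sample, clipped for numerical stability
        s = -1.0 / (1.0 + np.exp(np.clip(margins, -30.0, 30.0)))
        norm = np.linalg.norm(w)
        dnorm = w / norm if norm > 0 else np.zeros(d)  # (sub)gradient of ||w||_2
        grad = (X * (s * y)[:, None]).mean(axis=0) - r * s.mean() * dnorm
        w = w - lr * grad
    return w
```

On clean linearly separable data this drives the robust loss well below its value at $w = 0$ and recovers the separating direction; the paper's contribution is that the same procedure retains robust-error guarantees even under agnostic label noise.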

Citations (11)


Authors (3)
