
Reliable Learning of Halfspaces under Gaussian Marginals

Published 18 Nov 2024 in cs.LG, cs.DS, and stat.ML (arXiv:2411.11238v1)

Abstract: We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012). The reliable PAC model captures learning scenarios where one type of error is costlier than the others. Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $$d^{O(\log(\min\{1/\alpha,\, 1/\epsilon\}))} \min\left(2^{\log(1/\epsilon)^{O(\log(1/\alpha))}},\; 2^{\mathrm{poly}(1/\epsilon)}\right),$$ where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log(1/\alpha))}$ dependence is best possible. Conceptually, our results imply a strong computational separation between reliable agnostic learning and standard agnostic learning of halfspaces in the Gaussian setting.
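The reliable agnostic model referenced above can be stated more concretely. The following is an illustrative sketch of the (positive) reliable objective in the style of Kalai et al. (2012); the notation here is an assumption for exposition, and the paper's exact formulation may differ in details.

```latex
% Positive reliable agnostic learning (sketch, notation illustrative).
% D: distribution over R^d x {-1,+1} with Gaussian x-marginal;
% H: the class of halfspaces. Split the error by its two sides:
\mathrm{err}_+(h) = \Pr_{(x,y)\sim D}\bigl[h(x)=+1 \wedge y=-1\bigr],
\qquad
\mathrm{err}_-(h) = \Pr_{(x,y)\sim D}\bigl[h(x)=-1 \wedge y=+1\bigr].

% A reliable learner must output h that (approximately) never makes
% the costly error, while competing with the best such hypothesis:
\mathrm{err}_+(h) \le \epsilon
\quad\text{and}\quad
\mathrm{err}_-(h) \le \min_{h^* \in H:\ \mathrm{err}_+(h^*)=0} \mathrm{err}_-(h^*) + \epsilon.
```

This asymmetry (false positives must be driven to near zero, false negatives merely competitive) is what makes the reliable setting harder than standard agnostic learning, and is the source of the computational separation the abstract describes.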
