Active Regression via Linear-Sample Sparsification

Published 27 Nov 2017 in cs.LG and cs.DS (arXiv:1711.10051v3)

Abstract: We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution $\beta^*$ minimizing $\|X\beta - y\|_2$ given the entire unlabeled dataset $X \in \mathbb{R}^{n \times d}$ but only observing a small number of labels $y_i$. We show that $O(d)$ labels suffice to find a constant factor approximation $\tilde{\beta}$: \[ \mathbb{E}[\|X\tilde{\beta} - y\|_2^2] \leq 2\, \mathbb{E}[\|X \beta^* - y\|_2^2]. \] This improves on the best previous result of $O(d \log d)$ from leverage score sampling. We also present results for the \emph{inductive} setting, showing when $\tilde{\beta}$ will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
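
To make the setting concrete, here is a minimal numpy sketch of the leverage-score sampling baseline that the abstract's $O(d \log d)$ bound refers to; the paper's contribution is to replace the i.i.d. sampling step with a linear-size spectral sparsification so that $O(d)$ labeled rows suffice. The function names and the `query_label` oracle are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def leverage_scores(X):
    # Row i's leverage score is the squared norm of row i of U,
    # where X = U S V^T is a thin SVD; the scores sum to rank(X) <= d.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def active_regression(X, query_label, m, seed=0):
    """Sample m rows i.i.d. proportional to leverage scores, query only
    those m labels, and solve the importance-reweighted least squares.
    (Baseline sketch; the paper achieves m = O(d) via sparsification.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    p = leverage_scores(X)
    p = p / p.sum()                    # sampling distribution over rows
    idx = rng.choice(n, size=m, p=p)   # rows whose labels we pay to observe
    w = 1.0 / np.sqrt(m * p[idx])      # importance-sampling reweighting
    y_obs = np.array([query_label(i) for i in idx])
    beta, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y_obs, rcond=None)
    return beta

# Toy usage: the hidden labels y are revealed only through query_label.
n, d = 5000, 20
rng = np.random.default_rng(1)
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
beta_tilde = active_regression(X, lambda i: y[i], m=10 * d)
```

The $1/\sqrt{m\,p_i}$ reweighting makes the sampled least-squares objective an unbiased estimator of the full objective for every fixed $\beta$; the technical work in the paper lies in choosing the rows so that $O(d)$ queries already yield the constant-factor guarantee stated above.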

Citations (52)

Authors (2)

Xue Chen, Eric Price
