A Risk Ratio Comparison of $l_0$ and $l_1$ Penalized Regression

Published 21 Oct 2015 in math.ST, stat.ME, and stat.TH | arXiv:1510.06319v1

Abstract: There has been an explosion of interest in using $l_1$-regularization in place of $l_0$-regularization for feature selection. We present theoretical results showing that while $l_1$-penalized linear regression never outperforms $l_0$-regularization by more than a constant factor, in some cases using an $l_1$ penalty is infinitely worse than using an $l_0$ penalty. We also compare algorithms for solving these two problems and show that although solutions can be found efficiently for the $l_1$ problem, the "optimal" $l_1$ solutions are often inferior to $l_0$ solutions found using classic greedy stepwise regression. Furthermore, we show that solutions obtained by solving the convex $l_1$ problem can be improved by selecting the best of the $l_1$ models (fit for different regularization penalties) using an $l_0$ criterion. In other words, an approximate solution to the right problem can be better than the exact solution to the wrong problem.
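The final point of the abstract, re-ranking the $l_1$ (lasso) path solutions with an $l_0$-style criterion, can be illustrated with a short sketch. The code below is not taken from the paper: it assumes a criterion of the form RSS $+ \lambda_0 \cdot \|\beta\|_0$ with a BIC-like weight, and uses scikit-learn's lasso path as a stand-in for the convex $l_1$ solver.

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Hypothetical illustration: fit the full l1 (lasso) path, then re-rank the
# resulting models with an l0-style criterion, RSS + lam0 * (# nonzeros).
# The criterion and lam0 value are assumptions, not the paper's exact choice.

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                      # samples, features, true support size
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0                            # sparse ground-truth coefficients
y = X @ beta + rng.standard_normal(n)

# Compute lasso solutions over a grid of regularization penalties.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

lam0 = 2.0 * np.log(p)                    # assumed l0 penalty weight
best_score, best_idx = np.inf, None
for j in range(coefs.shape[1]):
    b = coefs[:, j]
    rss = np.sum((y - X @ b) ** 2)
    score = rss + lam0 * np.count_nonzero(b)
    if score < best_score:
        best_score, best_idx = score, j

print(f"selected alpha = {alphas[best_idx]:.4f}, "
      f"nonzeros = {np.count_nonzero(coefs[:, best_idx])}")
```

In the paper's framing, a greedy stepwise ($l_0$) search is the competing approach; the sketch shows only the step of choosing among the $l_1$ models with an $l_0$ criterion rather than by cross-validated $l_1$ error.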
