A Differentiable Alternative to the Lasso Penalty

Published 16 Sep 2016 in stat.ME (arXiv:1609.04985v1)

Abstract: Regularized regression has become very popular, particularly for high-dimensional problems, where the addition of a penalty term to the log-likelihood allows inference where traditional methods fail. A number of penalties have been proposed in the literature, such as the lasso, SCAD, ridge and elastic net, to name a few. Despite their advantages and remarkable performance in rather extreme settings, where $p \gg n$, all of these penalties, with the exception of ridge, are non-differentiable at zero. This can be a limitation in certain cases, such as the computational efficiency of parameter estimation in non-linear models or the derivation of estimators of the degrees of freedom for model selection criteria. With this paper, we provide the scientific community with a differentiable penalty, which can be used in any situation, but particularly where differentiability plays a key role. We show some desirable features of this function and prove theoretical properties of the resulting estimators within a regularized regression context. A simulation study and the analysis of a real dataset show overall a good performance under different scenarios. The method is implemented in the R package DLASSO, freely available from CRAN.
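The abstract does not state the paper's actual penalty function, but the core idea — replacing the non-differentiable lasso penalty $|\beta|$ with a function that is smooth at zero — can be illustrated with a common generic surrogate, $\sqrt{\beta^2 + \varepsilon^2} - \varepsilon$. This sketch is an assumption for illustration only, not the penalty proposed in the paper or implemented in DLASSO:

```python
import math

def smooth_abs(x, eps=1e-4):
    # Generic smooth surrogate for |x| (NOT the paper's penalty):
    # sqrt(x^2 + eps^2) - eps, which tends to |x| as eps -> 0
    # and, unlike |x|, is differentiable at x = 0.
    return math.sqrt(x * x + eps * eps) - eps

def smooth_abs_grad(x, eps=1e-4):
    # The gradient exists everywhere, including at x = 0 (where it is 0),
    # so standard gradient-based optimizers apply without subgradients.
    return x / math.sqrt(x * x + eps * eps)
```

Because the surrogate and its gradient are defined everywhere, a penalized log-likelihood built with it can be optimized with ordinary gradient methods, which is the computational advantage the abstract highlights over lasso-type penalties.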
