
Lasso tuning through the flexible-weighted bootstrap

Published 10 Mar 2019 in stat.ME (arXiv:1903.03935v1)

Abstract: Regularized regression approaches such as the Lasso have been widely adopted for constructing sparse linear models in high-dimensional datasets. A complexity in fitting these models is tuning the parameters that control the level of sparsity introduced through penalization. The most common approach to selecting the penalty parameter is $k$-fold cross-validation. While cross-validation minimizes the empirical prediction error, approaches such as the $m$-out-of-$n$ paired bootstrap, which use smaller training datasets, are consistent in selecting the non-zero coefficients of the oracle model, performing well asymptotically but having limitations when $n$ is small. In fact, for models such as the Lasso there is a monotonic relationship between the size of the training sets and the penalty parameter. We propose a generalization of these methods for selecting the regularization parameter based on a flexible-weighted bootstrap procedure that mimics the $m$-out-of-$n$ bootstrap and overcomes its challenges for all sample sizes. Through simulation studies we demonstrate that, when selecting a penalty parameter, the choice of weights in the bootstrap procedure can be used to dictate the size of the penalty parameter and hence the sparsity of the fitted model. We empirically illustrate our weighted bootstrap procedure by applying the Lasso to integrate clinical and microRNA data in modeling Alzheimer's disease. In both the real and simulated data we find a narrow part of the parameter space that performs well, emulating an $m$-out-of-$n$ bootstrap, and show that our procedure can be used to improve interpretation of other optimization heuristics.
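The abstract does not specify the paper's exact weighting scheme, so the following is only a minimal sketch of the general idea: draw random observation weights on each bootstrap replicate (here mean-one gamma weights, a common choice in weighted bootstraps), where a smaller shape parameter concentrates weight on fewer observations and thereby mimics an $m$-out-of-$n$ resample with $m < n$; then score a grid of Lasso penalties across replicates. The gamma weights, the inverse-weight error score, and all variable names are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic sparse-regression data (illustrative only).
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

def weighted_bootstrap_lambda(X, y, lambdas, B=50, shape=0.5, rng=rng):
    """Select a Lasso penalty via a flexible-weighted bootstrap (sketch).

    Each replicate draws i.i.d. mean-one gamma observation weights; a
    shape parameter < 1 puts most weight on a subset of observations,
    loosely emulating an m-out-of-n bootstrap. Prediction error on each
    replicate is scored with inverse weights so that lightly weighted
    (nearly held-out) observations dominate the error, an out-of-bag
    analogue chosen here purely for illustration.
    """
    n = len(y)
    errs = np.zeros(len(lambdas))
    for _ in range(B):
        w = rng.gamma(shape=shape, scale=1.0 / shape, size=n)  # E[w] = 1
        for j, lam in enumerate(lambdas):
            model = Lasso(alpha=lam, max_iter=10_000)
            model.fit(X, y, sample_weight=w)
            resid = y - model.predict(X)
            errs[j] += np.average(resid**2, weights=1.0 / (w + 1e-8))
    return lambdas[int(np.argmin(errs))]

lambdas = np.logspace(-3, 0, 20)
best_lam = weighted_bootstrap_lambda(X, y, lambdas)
```

Shrinking the shape parameter plays the role of shrinking $m$: it makes each replicate effectively smaller, which (per the monotonic relationship noted in the abstract) pushes the selected penalty, and hence the sparsity of the fitted model, upward.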

