
On the Use of Information Criteria for Subset Selection in Least Squares Regression

Published 22 Nov 2019 in stat.ME and stat.AP (arXiv:1911.10191v3)

Abstract: Least squares (LS)-based subset selection methods are popular in linear regression modeling. Best subset selection (BS) is known to be NP-hard and has a computational cost that grows exponentially with the number of predictors. Recently, Bertsimas et al. (2016) formulated BS as a mixed integer optimization (MIO) problem and largely reduced the computational overhead by using a well-developed optimization solver, but the current methodology is not scalable to very large datasets. In this paper, we propose a novel LS-based method, the best orthogonalized subset selection (BOSS) method, which performs BS upon an orthogonalized basis of ordered predictors and scales easily to large problem sizes. Another challenge in applying LS-based methods in practice is the selection rule to choose the optimal subset size k. Cross-validation (CV) requires fitting a procedure multiple times and results in a selected k that is random across repeated application to the same dataset. Compared to CV, information criteria only require fitting a procedure once, but they require knowledge of the effective degrees of freedom for the fitting procedure, which is generally not available analytically for complex methods. Since BOSS uses orthogonalized predictors, we first explore a connection for orthogonal non-random predictors between BS and its Lagrangian formulation (i.e., minimization of the residual sum of squares plus the product of a regularization parameter and k), and based on this connection propose a heuristic degrees of freedom (hdf) for BOSS that can be estimated via an analytically-based expression. We show in both simulations and real data analysis that BOSS using a proposed Kullback-Leibler based information criterion AICc-hdf has the strongest performance of all of the LS-based methods considered and is competitive with regularization methods, with the computational effort of a single ordinary LS fit.
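The abstract's core idea can be illustrated with a minimal sketch: orthogonalize the predictors via a QR decomposition, exploit the fact that on an orthogonal basis the best subset of size k simply keeps the k largest-magnitude coefficients, and choose k with an AICc-style criterion. This is NOT the authors' implementation: it skips their predictor-ordering step and uses a naive degrees of freedom (df = k + 1) in place of the paper's heuristic degrees of freedom (hdf), purely for illustration.

```python
import numpy as np

def boss_sketch(X, y):
    """Illustrative sketch of BOSS-style selection (not the paper's method).

    Assumes the columns of X are already in a meaningful order; the paper
    orders predictors before orthogonalizing.
    """
    n, p = X.shape
    Q, R = np.linalg.qr(X)        # orthogonalized basis of the predictors
    z = Q.T @ y                   # LS coefficients on the orthogonal basis
    # On an orthogonal basis, the best subset of size k keeps the k
    # coefficients with the largest |z_j|.
    order = np.argsort(-np.abs(z))
    best = None
    for k in range(p + 1):
        keep = order[:k]
        rss = y @ y - z[keep] @ z[keep]   # residual sum of squares
        df = k + 1                        # naive df; the paper uses hdf here
        if n - df - 1 <= 0 or rss <= 0:
            break
        # Gaussian AICc: n*log(RSS/n) + 2*df*n/(n - df - 1)
        aicc = n * np.log(rss / n) + 2 * df * n / (n - df - 1)
        if best is None or aicc < best[0]:
            best = (aicc, keep)
    _, keep = best
    # Map the selected orthogonal coefficients back to the original coordinates.
    gamma = np.zeros(p)
    gamma[keep] = z[keep]
    beta = np.linalg.solve(R, gamma)
    return len(keep), beta
```

With a strong sparse signal, the criterion typically recovers the true support; swapping the naive df for the paper's hdf is what makes the criterion reliable for BS-type procedures more generally.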
