Newton-type Methods with the Proximal Gradient Step for Sparse Estimation
Abstract: In this paper, we propose new methods for efficiently solving the convex optimization problems that arise in sparse estimation. These include a new quasi-Newton method that avoids explicitly computing the Hessian matrix, for which we prove fast convergence. We also prove local convergence of the Newton method under weaker assumptions. The proposed methods are particularly efficient and effective for L1-regularized and group-regularized problems, since each update performs variable selection. Through numerical experiments, we demonstrate the efficiency of our methods on problems encountered in sparse estimation. Our contributions comprise both theoretical guarantees and practical applications to a variety of problems.
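To make the building block named in the title concrete, the following is a minimal sketch of a proximal gradient step applied to an L1-regularized least-squares (lasso) problem. The helper names (`soft_threshold`, `prox_grad_step`) are hypothetical illustrations, and a plain scalar step size is used here; the paper's Newton-type methods instead incorporate (approximate) second-order curvature information in this step, which this sketch does not attempt to reproduce.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step(x, grad_f, step, lam):
    """One proximal gradient step for min_x f(x) + lam * ||x||_1.

    Newton-type variants replace the scalar `step` with curvature
    information (an exact or quasi-Newton Hessian approximation);
    this scalar version is only the first-order baseline.
    """
    return soft_threshold(x - step * grad_f(x), step * lam)

# Illustrative use on a small lasso problem: f(x) = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1 / Lipschitz constant of grad f

x = np.zeros(5)
for _ in range(200):
    x = prox_grad_step(x, lambda z: A.T @ (A @ z - b), step, lam)
print(x)  # sparse iterate: some coordinates are exactly zero
```

Note that the soft-thresholding prox sets small coordinates exactly to zero, which illustrates the abstract's point that each update performs variable selection.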