A Family of Inexact SQA Methods for Non-Smooth Convex Minimization with Provable Convergence Guarantees Based on the Luo-Tseng Error Bound Property

Published 24 May 2016 in math.OC and math.NA (arXiv:1605.07522v2)

Abstract: We propose a new family of inexact sequential quadratic approximation (SQA) methods, which we call the inexact regularized proximal Newton ($\textsf{IRPN}$) method, for minimizing the sum of two closed proper convex functions, one of which is smooth and the other possibly non-smooth. Our proposed method features strong convergence guarantees even when applied to problems with degenerate solutions, while allowing the inner minimization to be solved inexactly. Specifically, we prove that when the problem possesses the so-called Luo-Tseng error bound (EB) property, $\textsf{IRPN}$ converges globally to an optimal solution, and the local convergence rate of the sequence of iterates generated by $\textsf{IRPN}$ is linear, superlinear, or even quadratic, depending on the choice of parameters of the algorithm. Prior to this work, the EB property had been used extensively to establish the linear convergence of various first-order methods. However, to the best of our knowledge, this work is the first to use the Luo-Tseng EB property to establish the superlinear convergence of SQA-type methods for non-smooth convex minimization. As a consequence of our result, $\textsf{IRPN}$ can solve regularized regression or classification problems in the high-dimensional setting with provable convergence guarantees. We compare $\textsf{IRPN}$ with several empirically efficient algorithms by applying them to the $\ell_1$-regularized logistic regression problem. Experimental results show the competitiveness of our proposed method.
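To make the structure of an SQA-type method concrete, the following is a minimal, hypothetical sketch of a regularized proximal Newton outer loop for $\ell_1$-regularized logistic regression, with each quadratic subproblem solved inexactly by proximal-gradient (ISTA) steps. This is not the paper's exact $\textsf{IRPN}$ algorithm: the function names, the residual-based regularization weight, and the fixed inner iteration counts are illustrative assumptions standing in for the paper's adaptive parameter and inexactness rules.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_fgh(X, y, w):
    """Value, gradient, and Hessian of the average logistic loss
    f(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)), y_i in {-1, +1}."""
    n = X.shape[0]
    z = y * (X @ w)
    f = np.mean(np.logaddexp(0.0, -z))      # numerically stable log(1+e^{-z})
    p = 1.0 / (1.0 + np.exp(z))             # sigmoid(-z)
    g = -(X.T @ (y * p)) / n
    H = (X.T * (p * (1.0 - p))) @ X / n
    return f, g, H

def sqa_sketch(X, y, lam=0.05, outer_iters=20, inner_iters=50):
    """Hypothetical regularized proximal Newton sketch: at each outer step,
    build the quadratic model g^T d + 0.5 d^T (H + mu*I) d + lam*||w + d||_1
    and minimize it inexactly with ISTA, then take the full step w <- w + d."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(outer_iters):
        _, g, H = logistic_fgh(X, y, w)
        # prox-gradient residual as a surrogate optimality measure;
        # tying mu to it is one illustrative regularization choice
        mu = np.linalg.norm(w - soft_threshold(w - g, lam))
        Hreg = H + mu * np.eye(d)
        L = np.linalg.norm(Hreg, 2)         # step size from the model's curvature
        dvec = np.zeros(d)
        for _ in range(inner_iters):        # inexact inner solve (ISTA on the model)
            grad_model = g + Hreg @ dvec
            dvec = soft_threshold(w + dvec - grad_model / L, lam / L) - w
        w = w + dvec
    return w
```

The inner loop is where the "inexact" in inexact SQA enters: it runs only a fixed number of ISTA iterations on the subproblem rather than solving it to optimality, whereas the paper drives the convergence-rate results through an adaptive inner stopping criterion tied to the EB property.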
