
Convergence to Second-Order Stationarity for Constrained Non-Convex Optimization

Published 4 Oct 2018 in math.OC (arXiv:1810.02024v2)

Abstract: We consider the problem of finding an approximate second-order stationary point of a constrained non-convex optimization problem. We first show that, unlike the gradient descent method for unconstrained optimization, the vanilla projected gradient descent algorithm may converge to a strict saddle point even when there is only a single linear constraint. We then provide a hardness result by showing that checking $(\epsilon_g, \epsilon_H)$-second-order stationarity is NP-hard even in the presence of linear constraints. Despite our hardness result, we identify instances of the problem for which checking second-order stationarity can be done efficiently. For such instances, we propose a dynamic second-order Frank--Wolfe algorithm which converges to $(\epsilon_g, \epsilon_H)$-second-order stationary points in $\mathcal{O}(\max\{\epsilon_g^{-2}, \epsilon_H^{-3}\})$ iterations. The proposed algorithm can be used in general constrained non-convex optimization as long as the constrained quadratic sub-problem can be solved efficiently.
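The vanilla projected gradient descent scheme discussed in the abstract follows the update $x_{k+1} = \mathrm{Proj}_C(x_k - \eta \nabla f(x_k))$. Below is a minimal sketch of that update rule on a toy problem; the objective, constraint, and step size are illustrative assumptions for demonstration only and are not the paper's counterexample construction.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
    """Vanilla PGD: x_{k+1} = Proj_C(x_k - step * grad(x_k)).

    The paper shows that, unlike unconstrained gradient descent, this
    scheme may converge to a strict saddle point even under a single
    linear constraint.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy setup (assumed for illustration): f(x, y) = x^2 - y^2, which has a
# strict saddle of the unconstrained problem at the origin, with the
# single linear constraint y = 0 (projection zeroes the y-coordinate).
grad = lambda v: np.array([2.0 * v[0], -2.0 * v[1]])
project = lambda v: np.array([v[0], 0.0])  # Euclidean projection onto {y = 0}

x_star = projected_gradient_descent(grad, project, x0=[1.0, 0.5])
# Restricted to the feasible line y = 0, the iterates contract toward
# (0, 0), a strict saddle of the unconstrained objective.
```

The paper's proposed alternative replaces this first-order update with a dynamic second-order Frank--Wolfe step, which requires solving a constrained quadratic sub-problem at each iteration.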

Citations (29)
