
Distributed Gradient Descent: Nonconvergence to Saddle Points and the Stable-Manifold Theorem

Published 7 Aug 2019 in math.OC and cs.MA | arXiv:1908.02747v2

Abstract: The paper studies a distributed gradient descent (DGD) process and addresses the problem of showing that, in nonconvex optimization, DGD typically converges to local minima rather than saddle points. The setting is unconstrained minimization of a smooth objective function. In centralized settings, nonconvergence of gradient descent (and its variants) to saddle points is typically established via the stable-manifold theorem from classical dynamical systems theory. However, the classical stable-manifold theorem is not applicable in distributed settings. The paper develops an appropriate stable-manifold theorem for DGD, showing that convergence to saddle points can only occur from a low-dimensional stable manifold. Under appropriate assumptions (e.g., coercivity), this result implies that DGD typically converges to local minima and not to saddle points.
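For readers unfamiliar with the DGD iteration studied in this setting, a minimal sketch of the standard consensus-plus-gradient update is given below, assuming the common form x_i(k+1) = sum_j W[i,j] x_j(k) - alpha * grad f_i(x_i(k)). The ring mixing matrix, quadratic local objectives, and constant step size are illustrative assumptions, not details taken from the paper.

```python
# Minimal DGD sketch: each agent mixes its iterate with neighbors' iterates
# (via a doubly stochastic matrix W) and then takes a local gradient step.
import numpy as np

n_agents, dim = 4, 2

# Doubly stochastic mixing matrix for a ring of 4 agents (illustrative choice).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Hypothetical smooth local objectives: f_i(x) = 0.5 * ||x - t_i||^2.
targets = np.random.default_rng(0).normal(size=(n_agents, dim))

def local_grad(i, x):
    # Gradient of the i-th agent's local objective.
    return x - targets[i]

x = np.zeros((n_agents, dim))   # one local iterate per agent
alpha = 0.1                     # constant step size, for simplicity

for k in range(200):
    consensus = W @ x                                           # mix with neighbors
    grads = np.stack([local_grad(i, x[i]) for i in range(n_agents)])
    x = consensus - alpha * grads                               # DGD update

# With a constant step size, the local iterates should end up near the
# minimizer of the sum of local objectives (here, the average of the targets).
print(x)
```

In this toy convex example the iterates simply approach consensus near the global minimizer; the paper's contribution concerns the nonconvex case, where the developed stable-manifold theorem rules out convergence to saddle points except from a low-dimensional set of initializations.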

Citations (14)
