Exponential Convergence for Distributed Smooth Optimization Under the Restricted Secant Inequality Condition

Published 7 Sep 2019 in math.OC (arXiv:1909.03282v1)

Abstract: This paper considers the distributed smooth optimization problem of minimizing a global cost function formed as the sum of local smooth cost functions, using only local information exchange. The standard assumption for proving exponential/linear convergence of first-order methods is strong convexity of the cost functions, which does not hold in many practical applications. In this paper, we first show that the continuous-time distributed primal-dual gradient algorithm converges exponentially to a global minimizer under the assumption that the global cost function satisfies the restricted secant inequality condition. This condition is weaker than strong convexity since it does not require convexity and the global minimizers need not be unique. We then show that the discrete-time distributed primal-dual algorithm constructed from Euler's approximation method converges linearly to a global minimizer under the same condition. The theoretical results are illustrated by numerical simulations.
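
Algorithm Sketch (illustrative)

The restricted secant inequality (RSI) invoked above requires that ⟨∇f(x), x − x̄⟩ ≥ μ‖x − x̄‖² for all x, where x̄ is the projection of x onto the set of global minimizers and μ > 0; strong convexity implies RSI, but RSI also holds for some nonconvex functions whose minimizers are not unique. The code below is a minimal numerical sketch, not the paper's exact algorithm, of a discrete-time distributed primal-dual gradient iteration of the general form the abstract describes: each agent keeps a primal estimate and a dual (consensus) variable, mixes with its neighbors through the graph Laplacian, and takes Euler steps of size alpha. The step size, quadratic local costs, and ring-graph Laplacian are illustrative assumptions, not taken from the paper.

import numpy as np

def distributed_primal_dual(grads, L, alpha=0.01, iters=5000):
    """Run a basic discrete-time distributed primal-dual gradient iteration.

    grads: list of local gradient functions, one per agent (scalar variable here).
    L:     graph Laplacian of the communication graph (n x n).
    """
    n = len(grads)
    x = np.zeros(n)  # primal estimates, one per agent
    v = np.zeros(n)  # dual variables enforcing consensus across agents
    for _ in range(iters):
        g = np.array([grads[i](x[i]) for i in range(n)])
        x_next = x - alpha * (g + L @ x + v)  # Euler step on the primal dynamics
        v_next = v + alpha * (L @ x)          # Euler step on the dual dynamics
        x, v = x_next, v_next
    return x

# Example (assumed data): four agents with local costs f_i(x) = 0.5 * (x - b_i)^2,
# whose sum is strongly convex and hence also satisfies the restricted secant
# inequality. The global minimizer is the average of the b_i (here 2.5).
b = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, bi=bi: x - bi for bi in b]
L = np.array([[ 2., -1.,  0., -1.],   # Laplacian of a 4-node ring graph
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
print(distributed_primal_dual(grads, L))  # all entries close to 2.5

With these quadratic costs the iterates of all four agents agree and converge to the common minimizer 2.5; the linear (geometric) rate claimed in the paper corresponds to the resulting update having spectral radius strictly below one for a sufficiently small step size.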
