
Improved Convergence Rate for a Distributed Two-Time-Scale Gradient Method under Random Quantization

Published 28 May 2021 in eess.SY and cs.SY (arXiv:2105.14089v1)

Abstract: We study the so-called distributed two-time-scale gradient method for solving convex optimization problems over a network of agents when the communication bandwidth between the nodes is limited, so the information exchanged between the nodes must be quantized. Our main contribution is a novel analysis resulting in an improved convergence rate for this method compared to existing work. In particular, we show that when the underlying objective function is strongly convex and smooth, the method converges to the optimal solution at a rate $O(\log_2 k/\sqrt{k})$. The key technique in our analysis is a Lyapunov function that simultaneously captures the coupling between the consensus and optimality errors generated by the method.
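To make the mechanics concrete, below is a minimal numerical sketch of a distributed two-time-scale gradient iteration with an unbiased, randomly dithered quantizer. The quadratic local objectives, the ring-graph mixing matrix W, the quantizer resolution DELTA, and the step-size schedules alpha_k and beta_k are all illustrative assumptions for this sketch, not the exact problem data or schedules analyzed in the paper; the defining two-time-scale property is only that the gradient step size decays faster than the consensus step size, i.e., alpha_k / beta_k -> 0.

```python
# Minimal sketch of a distributed two-time-scale gradient method under
# random quantization. Objectives, graph, quantizer resolution, and
# step sizes are illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

# --- Problem: each agent i holds f_i(x) = 0.5 * a_i * (x - b_i)^2 ---------
n_agents = 5
a = rng.uniform(1.0, 2.0, n_agents)   # strong-convexity parameters (assumed)
b = rng.uniform(-1.0, 1.0, n_agents)  # local minimizers (assumed)
x_star = np.sum(a * b) / np.sum(a)    # minimizer of the global sum of f_i

def grad(i, x):
    """Gradient of agent i's local objective at x."""
    return a[i] * (x - b[i])

# --- Doubly stochastic mixing matrix for a ring graph (assumed) -----------
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# --- Unbiased random quantizer: E[Q(x)] = x --------------------------------
DELTA = 0.05  # quantization resolution (assumed)

def quantize(x):
    """Dithered rounding onto the grid DELTA*Z; unbiased in expectation."""
    return DELTA * np.floor(x / DELTA + rng.uniform(size=np.shape(x)))

# --- Two-time-scale iteration ----------------------------------------------
# The consensus step size beta_k decays more slowly than the gradient step
# size alpha_k, so alpha_k / beta_k -> 0 (the two-time-scale property).
x = rng.uniform(-2.0, 2.0, n_agents)
for k in range(20000):
    alpha = 1.0 / (k + 10)             # fast time scale: gradient step (assumed)
    beta = 1.0 / (k + 10) ** (2 / 3)   # slow time scale: consensus step (assumed)
    q = quantize(x)                    # agents only exchange quantized values
    consensus = W @ q - q              # disagreement with quantized neighbors
    x = x + beta * consensus - alpha * np.array(
        [grad(i, x[i]) for i in range(n_agents)]
    )

print("agent estimates:", np.round(x, 3))
print("optimal x*     :", round(x_star, 3))
```

Running the sketch, the agents' estimates should cluster around the network-wide minimizer x_star, with the decaying step sizes progressively suppressing the quantization-induced error.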

Citations (10)
