
Estimation Network Design framework for efficient distributed optimization

Published 23 Apr 2024 in math.OC, cs.DC, cs.LG, and cs.MA | (2404.15273v1)

Abstract: Distributed decision problems feature a group of agents that can only communicate over a peer-to-peer network, without a central memory. In applications such as network control and data ranking, each agent is only affected by a small portion of the decision vector: this sparsity is typically ignored in distributed algorithms, even though it could be leveraged to improve efficiency and scalability. To address this issue, our paper introduces Estimation Network Design (END), a graph-theoretic language for the analysis and design of distributed iterations. END algorithms can be tuned to exploit the sparsity of specific problem instances, reducing communication overhead and minimizing redundancy, yet without requiring case-by-case convergence analysis. In this paper, we showcase the flexibility of END in the context of distributed optimization. In particular, we study sparsity-aware versions of many established methods, including ADMM, AugDGM and Push-Sum DGD. Simulations on an estimation problem in sensor networks demonstrate that END algorithms can boost convergence speed and greatly reduce the communication and memory cost.
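The abstract's core idea, that each agent need only estimate and exchange the decision components that actually affect it, can be illustrated with a toy sketch. This is not the paper's END framework; the problem instance, local costs, interest sets, and step size below are all invented for illustration:

```python
# Illustrative sketch (NOT the END algorithm from the paper): a sparsity-aware
# variant of distributed gradient descent in which each agent stores and
# averages only the components of the decision vector it is affected by.

# Hypothetical instance: 3 agents, decision vector in R^3.
# S[i] lists the components entering agent i's local cost
#   f_i(x) = sum over j in S[i] of (x_j - a[i][j])**2.
S = {0: [0, 1], 1: [1, 2], 2: [0, 2]}
a = {0: {0: 1.0, 1: 2.0}, 1: {1: 4.0, 2: 0.0}, 2: {0: 3.0, 2: 2.0}}

# Each agent i keeps an estimate x[i][j] only for j in S[i],
# instead of a full copy of the decision vector.
x = {i: {j: 0.0 for j in S[i]} for i in S}

step = 0.2
for _ in range(200):
    # Per-component consensus among the agents that estimate that
    # component (averaging over the component's "estimation subgraph").
    avg = {}
    for j in range(3):
        holders = [i for i in S if j in S[i]]
        avg[j] = sum(x[i][j] for i in holders) / len(holders)
    for i in S:
        for j in S[i]:
            # Mix toward the component average, then take a gradient
            # step on the local cost: grad = 2 * (x_j - a_ij).
            x[i][j] = avg[j] - step * 2.0 * (x[i][j] - a[i][j])

# The average estimate of each component converges to the mean target
# among the agents interested in it: component 0 -> 2.0, 1 -> 3.0, 2 -> 1.0.
for j in range(3):
    holders = [i for i in S if j in S[i]]
    print(j, round(sum(x[i][j] for i in holders) / len(holders), 3))
```

Here only two agents ever touch each component, so memory and (in a networked implementation) communication scale with the interest sets rather than with the full decision dimension; this is the kind of saving END formalizes and exploits systematically.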

References (29)
  1. M. Bianchi and S. Grammatico, “The END: Estimation Network Design for games under partial-decision information,” IEEE Transactions on Control of Network Systems, Accepted for publication. [Online]. Available: https://arxiv.org/abs/2208.11377
  2. I. Necoara and D. Clipici, “Parallel random coordinate descent method for composite minimization: Convergence analysis and error bounds,” SIAM Journal on Optimization, vol. 26, no. 1, pp. 197–226, 2016.
  3. P. Richtárik and M. Takáč, “Distributed coordinate descent method for learning with big data,” Journal of Machine Learning Research, vol. 17, 2016.
  4. A. Nedić, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” SIAM Journal on Optimization, vol. 27, no. 4, pp. 2597–2633, 2017.
  5. J. Xu, Y. Tian, Y. Sun, and G. Scutari, “Distributed algorithms for composite optimization: Unified framework and convergence analysis,” IEEE Transactions on Signal Processing, vol. 69, pp. 3555–3570, 2021.
  6. I. Notarnicola, R. Carli, and G. Notarstefano, “Distributed partitioned big-data optimization via asynchronous dual decomposition,” IEEE Transactions on Control of Network Systems, vol. 5, no. 4, pp. 1910–1919, 2018.
  7. T. Erseghe, “A distributed and scalable processing method based upon ADMM,” IEEE Signal Processing Letters, vol. 19, no. 9, pp. 563–566, 2012.
  8. M. Todescato, N. Bof, G. Cavraro, R. Carli, and L. Schenato, “Partition-based multi-agent optimization in the presence of lossy and asynchronous communication,” Automatica, vol. 111, p. 108648, 2020.
  9. P. Giselsson, M. D. Doan, T. Keviczky, B. D. Schutter, and A. Rantzer, “Accelerated gradient methods and dual decomposition in distributed model predictive control,” Automatica, vol. 49, no. 3, pp. 829–833, 2013. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0005109813000101
  10. E. Dall’Anese, H. Zhu, and G. B. Giannakis, “Distributed optimal power flow for smart microgrids,” IEEE Transactions on Smart Grid, vol. 4, no. 3, pp. 1464–1475, 2013.
  11. J. F. Mota, J. M. Xavier, P. M. Aguiar, and M. Puschel, “Distributed optimization with local domains: Applications in MPC and network flows,” IEEE Transactions on Automatic Control, vol. 60, no. 7, pp. 2004–2009, 2015.
  12. S. A. Alghunaim, K. Yuan, and A. H. Sayed, “A proximal diffusion strategy for multiagent optimization with sparse affine constraints,” IEEE Transactions on Automatic Control, vol. 65, no. 11, pp. 4554–4567, 2020.
  13. S. A. Alghunaim and A. H. Sayed, “Distributed coupled multiagent stochastic optimization,” IEEE Transactions on Automatic Control, vol. 65, no. 1, pp. 175–190, 2020.
  14. P. Rebeschini and S. Tatikonda, “Locality in network optimization,” IEEE Transactions on Control of Network Systems, vol. 6, 2019.
  15. R. Brown, F. Rossi, K. Solovey, M. Tsao, M. T. Wolf, and M. Pavone, “On local computation for network-structured convex optimization in multi-agent systems,” IEEE Transactions on Control of Network Systems, 2021.
  16. J. Xu, S. Zhu, Y. C. Soh, and L. Xie, “Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes,” in 2015 54th IEEE Conference on Decision and Control (CDC), 2015, pp. 2055–2060.
  17. A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Transactions on Automatic Control, vol. 60, no. 3, pp. 601–615, 2015.
  18. P. Di Lorenzo and G. Scutari, “NEXT: In-network nonconvex optimization,” IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 2, pp. 120–136, 2016.
  19. M. Bianchi and S. Grammatico, “Fully distributed Nash equilibrium seeking over time-varying communication networks with linear convergence rate,” IEEE Control Systems Letters, vol. 5, no. 2, pp. 499–504, 2021.
  20. P. Chalermsook and J. Fakcharoenphol, “Simple distributed algorithms for approximating minimum Steiner trees,” L. Wang, Ed.   Springer Berlin Heidelberg, 2005, pp. 380–389.
  21. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, 2010.
  22. N. Bastianello, R. Carli, L. Schenato, and M. Todescato, “Asynchronous distributed optimization over lossy networks via relaxed ADMM: Stability and linear convergence,” IEEE Transactions on Automatic Control, vol. 66, no. 6, pp. 2620–2635, 2021.
  23. C. A. Uribe, S. Lee, A. Gasnikov, and A. Nedić, “A dual approach for optimal algorithms in distributed optimization over networks,” Optimization Methods and Software, vol. 36, pp. 1–37, 2021.
  24. W. Shi, Q. Ling, G. Wu, and W. Yin, “EXTRA: An exact first-order algorithm for decentralized consensus optimization,” SIAM Journal on Optimization, vol. 25, no. 2, pp. 944–966, 2015.
  25. Z. Li, W. Shi, and M. Yan, “A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates,” IEEE Transactions on Signal Processing, vol. 67, no. 17, pp. 4494–4506, 2019.
  26. G. Qu and N. Li, “Harnessing smoothness to accelerate distributed optimization,” IEEE Transactions on Control of Network Systems, vol. 5, pp. 159–166, 2018.
  27. A. Falsone, I. Notarnicola, G. Notarstefano, and M. Prandini, “Tracking-ADMM for distributed constraint-coupled optimization,” Automatica, vol. 117, p. 108962, 2020.
  28. X. Li, G. Feng, and L. Xie, “Distributed proximal algorithms for multiagent optimization with coupled inequality constraints,” IEEE Transactions on Automatic Control, vol. 66, no. 3, pp. 1223–1230, 2021.
  29. M. Charikar, C. Chekuri, T.-Y. Cheung, Z. Dai, A. Goel, S. Guha, and M. Li, “Approximation algorithms for directed Steiner problems,” Journal of Algorithms, vol. 33, pp. 73–91, 1999.
