Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access

Published 12 May 2023 in cs.NI, cs.LG, cs.SY, and eess.SY (arXiv:2305.07368v2)

Abstract: In this work, we focus on the communication aspect of decentralized learning, in which multiple agents train a shared machine learning model using decentralized stochastic gradient descent (D-SGD) over distributed data. In particular, we investigate the impact of broadcast transmission and a probabilistic random access policy on the convergence performance of D-SGD, accounting for the broadcast nature of wireless channels and the link dynamics of the communication topology. Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating system convergence.
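The access-probability optimization highlighted in the abstract can be illustrated with a small sketch. Assume a simple slotted random access model (a hypothetical stand-in for the paper's channel model, not the authors' exact formulation): in each slot every node broadcasts with probability p, and a directed link i→j succeeds only if i transmits while j and all of j's other neighbors stay silent. Under this assumption, the expected number of successful links can be computed in closed form from the topology, and the optimal p found by grid search:

```python
import numpy as np

def expected_successful_links(adj, p):
    """Expected number of successful directed links per slot, under the
    assumed collision model: link i->j succeeds iff i transmits (prob. p)
    while receiver j and all of j's other neighbors are silent."""
    n = len(adj)
    total = 0.0
    for j in range(n):
        neighbors = [i for i in range(n) if adj[j][i]]
        d = len(neighbors)
        for _ in neighbors:
            # i transmits: p; j silent: (1-p); j's other d-1 neighbors silent
            total += p * (1 - p) * (1 - p) ** (d - 1)
    return total

def optimal_access_probability(adj, grid=1001):
    """Grid search for the access probability maximizing expected links."""
    ps = np.linspace(0.0, 1.0, grid)
    vals = [expected_successful_links(adj, p) for p in ps]
    return ps[int(np.argmax(vals))]

# Example: a 6-node ring, where every node has degree 2. Each directed link
# then succeeds with probability p(1-p)^2, which is maximized at p = 1/3.
n = 6
ring = [[1 if abs(i - j) % n in (1, n - 1) else 0 for j in range(n)]
        for i in range(n)]
p_star = optimal_access_probability(ring)
print(round(p_star, 3))
```

For regular topologies the search recovers the intuitive trade-off: transmitting too rarely wastes slots, transmitting too often causes collisions, and the optimum balances the two at roughly 1/(degree + 1) under this collision model.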
