Federated Learning with Communication Delay in Edge Networks
Abstract: Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training across edge networks. This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator. A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step. Through theoretical analysis, an upper bound is derived on the global model loss achieved by FedDelAvg, which reveals a strong dependency of learning performance on the weighting and the learning rate. Experimental results on a popular ML task show significant improvements in convergence speed when the weighting is optimized to account for delay.
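To make the synchronization rule concrete, below is a minimal NumPy sketch of a FedDelAvg-style update, not the paper's exact formulation: the function names, the weighting symbol `gamma`, the constant downlink-only delay model, uniform (unweighted) averaging across devices, and the toy quadratic objectives are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def feddelavg_sync(w_local, w_global_delayed, gamma):
    """FedDelAvg-style synchronization: a convex combination of the
    device's current local model and the delayed global model.
    gamma = 0.0 overwrites the local model (standard FedAvg behavior);
    gamma = 1.0 ignores the stale global model entirely."""
    return gamma * w_local + (1.0 - gamma) * w_global_delayed

def feddelavg(grad_fns, w0, rounds=50, local_steps=5, lr=0.1,
              gamma=0.5, delay=2):
    """Simulate FedDelAvg with a fixed downlink delay (assumed, in rounds).

    grad_fns: one gradient function per device (heterogeneous local data).
    Devices receive the global model from `delay` rounds ago.
    """
    w_locals = [w0.copy() for _ in range(len(grad_fns))]
    # Buffer of past global models; the oldest entry is what devices see.
    history = deque([w0.copy()] * (delay + 1), maxlen=delay + 1)
    for _ in range(rounds):
        w_delayed = history[0]
        # Synchronization step: blend local and delayed global models.
        w_locals = [feddelavg_sync(w, w_delayed, gamma) for w in w_locals]
        # Local SGD between synchronizations.
        for _ in range(local_steps):
            w_locals = [w - lr * g(w) for w, g in zip(w_locals, grad_fns)]
        # Aggregator averages device models into a new global model.
        history.append(np.mean(w_locals, axis=0))
    return history[-1]

if __name__ == "__main__":
    # Toy quadratics: device i minimizes ||w - c_i||^2 / 2, so the
    # global optimum is the mean of the centers c_i.
    rng = np.random.default_rng(0)
    centers = rng.normal(size=(4, 3))
    grad_fns = [lambda w, c=c: w - c for c in centers]
    w_final = feddelavg(grad_fns, w0=np.zeros(3))
    print("final global model:", w_final)
    print("optimum (mean of centers):", centers.mean(axis=0))
```

In this sketch, tuning `gamma` trades off discarding stale global information against retaining local progress, which mirrors the abstract's point that convergence depends strongly on the weighting and the learning rate when delays are present.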