
Communication-Efficient Adaptive Batch Size Strategies for Distributed Local Gradient Methods

Published 20 Jun 2024 in stat.ML, cs.LG, and math.OC | arXiv:2406.13936v2

Abstract: Modern deep neural networks often require distributed training with many workers due to their large size. As the number of workers increases, communication overheads become the main bottleneck in data-parallel minibatch stochastic gradient methods with per-iteration gradient synchronization. Local gradient methods like Local SGD reduce communication by only synchronizing model parameters and/or gradients after several local steps. Despite an understanding of their convergence and the importance of batch sizes for training efficiency and generalization, optimal batch sizes for local gradient methods are difficult to determine. We introduce adaptive batch size strategies for local gradient methods that increase batch sizes adaptively to reduce minibatch gradient variance. We provide convergence guarantees under homogeneous data conditions and support our claims with image classification and language modeling experiments, demonstrating the effectiveness of our strategies for both training efficiency and generalization.
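To make the setup concrete, below is a minimal, self-contained sketch, not the paper's exact algorithm, of Local SGD combined with an adaptive batch size rule on a synthetic homogeneous least-squares problem. Each worker takes several local SGD steps between parameter-averaging rounds, and the batch size is doubled whenever the estimated variance of the minibatch gradient is large relative to its squared norm (a generic norm-test-style criterion). The problem, the threshold theta, the doubling schedule, and all hyperparameters are illustrative assumptions rather than the paper's actual choices.

```python
import numpy as np

# Sketch: Local SGD with a variance-based adaptive batch size rule.
# Assumptions (not from the paper): synthetic least-squares data shared
# i.i.d. across workers, parameter averaging every H local steps, and a
# generic "norm test" for deciding when to grow the batch.

rng = np.random.default_rng(0)

# Synthetic homogeneous data per worker: y = A x* + noise
d, n_per_worker, n_workers = 10, 2000, 4
x_star = rng.normal(size=d)
A = [rng.normal(size=(n_per_worker, d)) for _ in range(n_workers)]
y = [Ak @ x_star + 0.1 * rng.normal(size=n_per_worker) for Ak in A]

def sample_grads(k, x, batch_size):
    """Per-example gradients of 0.5*(a^T x - y)^2 on a random minibatch of worker k."""
    idx = rng.choice(n_per_worker, size=batch_size, replace=False)
    Ab, yb = A[k][idx], y[k][idx]
    residual = Ab @ x - yb                 # shape (batch_size,)
    return Ab * residual[:, None]          # shape (batch_size, d), per-example gradients

H = 8            # local steps between synchronizations
lr = 0.01        # step size (illustrative)
theta = 1.0      # variance-test threshold (illustrative)
batch_size = 8
x_workers = [np.zeros(d) for _ in range(n_workers)]

for round_ in range(50):
    # Local phase: each worker runs H SGD steps without communicating.
    for k in range(n_workers):
        for _ in range(H):
            g_mean = sample_grads(k, x_workers[k], batch_size).mean(axis=0)
            x_workers[k] -= lr * g_mean

    # Communication: average parameters across workers (Local SGD sync step).
    x_avg = np.mean(x_workers, axis=0)
    x_workers = [x_avg.copy() for _ in range(n_workers)]

    # Adaptive batch size: if the estimated variance of the minibatch mean
    # gradient is large relative to its squared norm, double the batch size
    # to reduce that variance.
    g = sample_grads(0, x_avg, batch_size)
    g_mean = g.mean(axis=0)
    var_est = np.sum(np.var(g, axis=0, ddof=1)) / batch_size
    if var_est > theta * np.sum(g_mean ** 2):
        batch_size = min(2 * batch_size, n_per_worker)

    if round_ % 10 == 0:
        print(f"round {round_:3d}  batch_size {batch_size:4d}  "
              f"dist to x* {np.linalg.norm(x_avg - x_star):.4f}")
```

Growing the batch size over training trades extra per-step computation for lower minibatch gradient variance, which is the mechanism the abstract points to for improving both training efficiency and generalization without adding synchronization rounds.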
