A Study on Distributed Strategies for Deep Learning Applications in GPU Clusters

Published 19 May 2025 in cs.DC (arXiv:2505.12832v1)

Abstract: As deep learning models grow in size and complexity, training them efficiently on single GPUs becomes increasingly infeasible. This study investigates the effectiveness of several distributed training strategies for scalable deep learning on GPU clusters: Distributed Data Parallel (DDP), Fully Sharded Data Parallelism (FSDP), and Parameter Server (PS) models. We conduct empirical evaluations across multiple models and datasets to assess trade-offs in memory usage, training time, GPU utilization, and model accuracy. Our results show that while FSDP reduces GPU memory usage by over 60%, it increases training time by up to 6x compared to DDP. In contrast, asynchronous PS training improves throughput but can lead to degraded accuracy due to stale updates. Through comprehensive analysis, we provide practical insights into the strengths and limitations of each strategy, offering guidance for selecting suitable methods based on system constraints and training objectives.
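
As a rough illustration of the strategies compared in the abstract, the sketch below (not taken from the paper) shows how the same PyTorch model might be wrapped with either DDP or FSDP. The model architecture, hyperparameters, and launch settings are placeholder assumptions; the paper's actual models, datasets, and configurations may differ.

    # Illustrative sketch: DDP vs. FSDP wrapping in PyTorch.
    # Assumes one process per GPU, launched e.g. via
    #   torchrun --nproc_per_node=<num_gpus> this_script.py
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def build_model():
        # Placeholder model; stands in for whatever model is being trained.
        return torch.nn.Sequential(
            torch.nn.Linear(1024, 4096),
            torch.nn.ReLU(),
            torch.nn.Linear(4096, 10),
        )

    def main(strategy: str = "ddp"):
        dist.init_process_group(backend="nccl")
        local_rank = dist.get_rank() % torch.cuda.device_count()
        torch.cuda.set_device(local_rank)

        model = build_model().cuda()

        if strategy == "ddp":
            # DDP keeps a full model replica on every GPU and all-reduces
            # gradients each step, which is fast but memory-hungry.
            model = DDP(model, device_ids=[local_rank])
        else:
            # FSDP shards parameters, gradients, and optimizer state across
            # ranks, trading extra communication for a smaller per-GPU
            # memory footprint.
            model = FSDP(model)

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        # Dummy batch standing in for a real data loader.
        x = torch.randn(32, 1024, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")

        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

The Parameter Server strategy discussed in the abstract is not shown here; it typically requires a separate server process holding the global parameters, with workers pushing gradients and pulling updated weights, synchronously or asynchronously.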
