
PipeMare: Asynchronous Pipeline Parallel DNN Training

Published 9 Oct 2019 in cs.DC, cs.LG, and stat.ML | arXiv:1910.05124v2

Abstract: Pipeline parallelism (PP) in neural network training enables larger models to be partitioned spatially, leading to both lower network communication and higher overall hardware utilization. Unfortunately, to preserve the statistical efficiency of sequential training, existing PP techniques sacrifice hardware efficiency by decreasing pipeline utilization or incurring extra memory costs. In this paper, we investigate to what extent these sacrifices are necessary. We devise PipeMare, a simple yet robust training method that tolerates asynchronous updates during PP execution without sacrificing utilization or memory, allowing efficient use of fine-grained pipeline parallelism. Concretely, when tested on ResNet and Transformer networks, asynchrony enables PipeMare to use up to $2.7\times$ less memory or achieve $4.3\times$ higher pipeline utilization, with similar model quality, compared to state-of-the-art synchronous PP training techniques.
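
The asynchrony the abstract refers to amounts to delayed-gradient updates: in a fine-grained pipeline, a micro-batch's gradient is computed against the weights it saw on entry but applied only after the weights have advanced by several optimizer steps. The toy sketch below illustrates that effect in isolation; it is not PipeMare itself, and the 1-D least-squares objective, the `delay` parameter, and all function names are illustrative assumptions.

import numpy as np

# Toy sketch of delayed-gradient (asynchronous) updates, the effect that arises
# in fine-grained pipeline parallelism: each gradient is evaluated at weights
# that are `delay` optimizer steps stale before being applied.
# The objective and names are illustrative, not the paper's setup.

rng = np.random.default_rng(0)

def grad(w, x, y):
    # Gradient of 0.5 * (w * x - y)**2 with respect to w.
    return (w * x - y) * x

def train(delay, steps=3000, lr=0.02):
    # SGD where each gradient is computed at weights from `delay` steps ago.
    w_true, w = 3.0, 0.0
    history = [w] * (delay + 1)        # weights seen by in-flight micro-batches
    for _ in range(steps):
        x = rng.normal()
        y = w_true * x + 0.01 * rng.normal()
        w_stale = history[0]           # weights from `delay` steps ago
        w -= lr * grad(w_stale, x, y)  # asynchronous update with a stale gradient
        history = history[1:] + [w]
    return w

for d in (0, 4, 16):
    print(f"delay={d:2d}  final w = {train(d):.4f}  (target 3.0)")

With a small delay the run still converges to the target; as the delay grows, the updates become noisier and less stable, which is the regime the paper's training method is designed to tolerate.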

Citations (105)
