
BPPSA: Scaling Back-propagation by Parallel Scan Algorithm

Published 23 Jul 2019 in cs.LG, cs.DC, and stat.ML | arXiv:1907.10134v3

Abstract: In an era when the performance of a single compute device plateaus, software must be designed to scale on massively parallel systems for better runtime performance. However, in the context of training deep learning models, the popular back-propagation (BP) algorithm imposes a strong sequential dependency in the process of gradient computation. Under model parallelism, BP takes $\Theta (n)$ steps to complete, which hinders its scalability on parallel systems ($n$ represents the number of compute devices into which a model is partitioned). In this work, in order to improve the scalability of BP, we reformulate BP into a scan operation, which is a primitive that performs an in-order aggregation on a sequence of values and returns the partial result at each step. We can then scale such a reformulation of BP on parallel systems by our modified version of the Blelloch scan algorithm, which theoretically takes $\Theta (\log n)$ steps. We evaluate our approach on a vanilla Recurrent Neural Network (RNN) trained on synthetic datasets and an RNN with Gated Recurrent Units (GRU) trained on the IRMAS dataset, and demonstrate up to $2.75\times$ speedup on the overall training time and $108\times$ speedup on the backward pass. We also demonstrate that the retraining of pruned networks can be a practical use case of our method.
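To make the scan primitive concrete: the abstract's $\Theta(\log n)$ claim comes from the Blelloch scan's two tree-shaped phases (up-sweep and down-sweep), each of which takes $\log_2 n$ levels when the operations within a level run concurrently. Below is an illustrative sketch of the classic Blelloch exclusive scan over a generic associative operator, simulated sequentially in Python. This is not the paper's modified algorithm (the paper adapts the scan for non-commutative Jacobian products in BP); the function names and the power-of-two length assumption are ours.

```python
def blelloch_exclusive_scan(values, op, identity):
    """Exclusive scan via the classic Blelloch algorithm.

    Each `while` iteration is one parallel level: all ops inside its
    inner `for` loop are independent and could run concurrently, giving
    Theta(log n) parallel steps. Assumes len(values) is a power of two.
    """
    a = list(values)
    n = len(a)

    # Up-sweep (reduce): build partial aggregates up a binary tree.
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):
            a[i] = op(a[i - step], a[i])
        step *= 2

    # Down-sweep: push prefixes back down the tree.
    a[n - 1] = identity
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):
            left = a[i - step]
            a[i - step] = a[i]          # right child inherits the prefix
            a[i] = op(left, a[i])       # left aggregate folded into prefix
        step //= 2
    return a

# Exclusive prefix sums: element i holds the aggregate of values[0..i-1].
print(blelloch_exclusive_scan([1, 2, 3, 4, 5, 6, 7, 8], lambda x, y: x + y, 0))
# → [0, 1, 3, 6, 10, 15, 21, 28]
```

In BPPSA's setting, the values would be (transposed) Jacobian matrices of each model partition and `op` a matrix product, so the scan's partial results correspond to the per-partition gradients that sequential BP would otherwise produce one step at a time.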

Citations (6)
