Joint Regularization on Activations and Weights for Efficient Neural Network Pruning

Published 19 Jun 2019 in cs.LG and stat.ML (arXiv:1906.07875v2)

Abstract: With the rapid scaling up of deep neural networks (DNNs), extensive research on model compression techniques such as weight pruning has been carried out to improve deployment efficiency. This work advances compression beyond the weights to neuron activations. We propose a joint regularization technique that simultaneously regularizes the distributions of weights and activations. By distinguishing and leveraging differences in significance among neuron responses and connections during learning, the jointly pruned network, namely \textit{JPnet}, optimizes the sparsity of activations and weights to improve execution efficiency. The deep sparsification achieved by JPnet exposes more optimization opportunities for existing DNN accelerators dedicated to sparse matrix operations. We thoroughly evaluate the effectiveness of joint regularization on various network models with different activation functions and datasets. Under a $0.4\%$ constraint on inference-accuracy degradation, a JPnet saves $72.3\% \sim 98.8\%$ of the computation cost of the original dense models, with up to $5.2\times$ and $12.3\times$ reductions in activation and weight counts, respectively.
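
To make the idea concrete, here is a minimal PyTorch sketch of joint regularization as the abstract describes it: an L1 penalty on both the weights and the activations is added to the task loss, so training drives both toward sparsity. The network, penalty form, and coefficients (`TinyMLP`, `joint_loss`, `lambda_w`, `lambda_a`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Sketch of joint regularization: L1 penalties on both weights and
# activations are added to the task loss, encouraging sparsity in both.
# The exact penalty and coefficients here are assumptions for illustration.

class TinyMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        a1 = torch.relu(self.fc1(x))   # ReLU already zeroes negatives;
        logits = self.fc2(a1)          # the L1 term pushes more units to zero
        return logits, a1

def joint_loss(logits, target, activations, model,
               lambda_w=1e-4, lambda_a=1e-4):
    """Task loss plus L1 regularizers on weights and activations."""
    task = nn.functional.cross_entropy(logits, target)
    # Weight regularizer: skip biases (1-D parameters).
    w_reg = sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)
    # Activation regularizer: mean L1 norm per sample in the batch.
    a_reg = activations.abs().sum() / activations.size(0)
    return task + lambda_w * w_reg + lambda_a * a_reg

# One training step on random data, just to show the wiring.
model = TinyMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
logits, a1 = model(x)
loss = joint_loss(logits, y, a1, model)
opt.zero_grad()
loss.backward()
opt.step()

# After training, small-magnitude weights can be pruned by thresholding
# (e.g. with torch.nn.utils.prune); activation sparsity is realized at
# inference time because many units remain exactly zero after ReLU.
```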
