Data Efficient Stagewise Knowledge Distillation

Published 15 Nov 2019 in cs.LG and cs.CV (arXiv:1911.06786v3)

Abstract: Despite the success of Deep Learning (DL), the deployment of modern DL models requiring large computational power poses a significant problem for resource-constrained systems. This necessitates building compact networks that reduce computation while preserving performance. Traditional Knowledge Distillation (KD) methods that transfer knowledge from teacher to student (a) use a single stage and (b) require the whole data set while distilling knowledge to the student. In this work, we propose a new method called Stagewise Knowledge Distillation (SKD), which builds on traditional KD through progressive stagewise training that leverages the knowledge gained from the teacher, resulting in a data-efficient distillation process. We evaluate our method on classification and semantic segmentation tasks and show, across the tested tasks, significant performance gains even when only a fraction of the data is used for distillation, without compromising the evaluation metric. We also compare our method with existing KD techniques and show that SKD outperforms them. Moreover, our method can be viewed as a generalized model compression technique that complements other model compression methods such as quantization or pruning.
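
The abstract only describes the approach at a high level. The following is a minimal, hypothetical PyTorch sketch of what stagewise feature distillation could look like, assuming the teacher and student are split into stages whose feature maps have matching shapes. The names (make_stages, forward_stages, stagewise_distill, student_head) and the MSE feature-matching loss are illustrative assumptions, not the paper's released code or exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stage-split networks (not the paper's actual architectures):
# teacher and student are lists of blocks whose stage outputs match in shape.
def make_stages(widths, in_ch=3):
    blocks = nn.ModuleList()
    for out_ch in widths:
        blocks.append(nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        ))
        in_ch = out_ch
    return blocks

teacher_stages = make_stages([16, 32, 64])
student_stages = make_stages([16, 32, 64])  # in practice a smaller network
student_head = nn.Linear(64, 10)            # hypothetical classification head

def forward_stages(stages, x, upto):
    # Run the input through blocks 0..upto and return the feature map.
    for block in stages[: upto + 1]:
        x = block(x)
    return x

def stagewise_distill(loader, epochs_per_stage=1, lr=1e-3):
    # Stage-by-stage feature matching: train only the current student stage
    # to reproduce the teacher's feature map at that stage (earlier stages
    # are not updated because their parameters are not in the optimizer).
    for stage_idx in range(len(student_stages)):
        opt = torch.optim.Adam(student_stages[stage_idx].parameters(), lr=lr)
        for _ in range(epochs_per_stage):
            for x, _ in loader:
                with torch.no_grad():
                    t_feat = forward_stages(teacher_stages, x, stage_idx)
                s_feat = forward_stages(student_stages, x, stage_idx)
                loss = F.mse_loss(s_feat, t_feat)
                opt.zero_grad()
                loss.backward()
                opt.step()
    # Final stage: train the task head with the usual supervised loss.
    opt = torch.optim.Adam(student_head.parameters(), lr=lr)
    for _ in range(epochs_per_stage):
        for x, y in loader:
            with torch.no_grad():
                feat = forward_stages(student_stages, x, len(student_stages) - 1)
            logits = student_head(feat.mean(dim=(2, 3)))  # global average pooling
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Toy usage with random data, purely illustrative.
if __name__ == "__main__":
    x = torch.randn(8, 3, 16, 16)
    y = torch.randint(0, 10, (8,))
    stagewise_distill([(x, y)])
```

The point of the sketch is only the stagewise loop: each student stage is fitted to the corresponding teacher stage before the next one is trained, and the task loss enters only at the final step, which is what distinguishes this from single-stage KD.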
