Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning

Published 11 Jun 2023 in cs.CV (arXiv:2306.06634v1)

Abstract: Multi-teacher knowledge distillation provides students with additional supervision from multiple pre-trained teachers with diverse information sources. Most existing methods explore different weighting strategies to obtain a powerful ensemble teacher, while ignoring that a student with poor learning ability may not benefit from such specialized integrated knowledge. To address this problem, we propose Adaptive Multi-teacher Knowledge Distillation with Meta-Learning (MMKD) to supervise the student with appropriate knowledge from a tailored ensemble teacher. With the help of a meta-weight network, the diverse yet compatible teacher knowledge in the output layer and intermediate layers is jointly leveraged to enhance student performance. Extensive experiments on multiple benchmark datasets validate the effectiveness and flexibility of our method. Code is available at https://github.com/Rorozhl/MMKD.
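
To make the idea concrete, below is a minimal PyTorch sketch of multi-teacher distillation with a learned per-sample weighting network. The names (MetaWeightNet, distill_step) and the single-level, non-meta-learned weighting are illustrative assumptions, not the paper's exact formulation; see the repository above for the authors' implementation.

    # Hypothetical sketch: per-sample teacher weighting for multi-teacher KD.
    # Assumes PyTorch; omits the paper's meta-learning update and
    # intermediate-layer losses, which MMKD leverages jointly.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MetaWeightNet(nn.Module):
        """Maps a student feature vector to a softmax weight per teacher."""
        def __init__(self, feat_dim: int, num_teachers: int):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, 64), nn.ReLU(),
                nn.Linear(64, num_teachers),
            )

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # (batch, num_teachers), rows sum to 1
            return F.softmax(self.mlp(feats), dim=-1)

    def distill_step(student_logits, student_feats, teacher_logits_list,
                     labels, weight_net, T: float = 4.0, alpha: float = 0.5):
        """Cross-entropy on labels + KL to a per-sample weighted
        ensemble of teacher logits (temperature-scaled)."""
        weights = weight_net(student_feats)                      # (B, K)
        teacher_logits = torch.stack(teacher_logits_list, 1)     # (B, K, C)
        ensemble = (weights.unsqueeze(-1) * teacher_logits).sum(1)  # (B, C)
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(ensemble / T, dim=-1),
            reduction="batchmean",
        ) * T * T
        ce = F.cross_entropy(student_logits, labels)
        return alpha * ce + (1 - alpha) * kd

In the paper, the weighting network is trained with meta-learning so the ensemble is tailored to the student's ability; this sketch instead shows only the forward loss computation that such a weighting induces at the output layer.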
