Online Knowledge Distillation with Diverse Peers

Authors

  • Defang Chen, Zhejiang University
  • Jian-Ping Mei, Zhejiang University of Technology
  • Can Wang, Zhejiang University
  • Yan Feng, Zhejiang University
  • Chun Chen, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v34i04.5746

Abstract

Distillation is an effective knowledge-transfer technique that uses predicted distributions of a powerful teacher model as soft targets to train a less-parameterized student model. A pre-trained high capacity teacher, however, is not always available. Recently proposed online variants use the aggregated intermediate predictions of multiple student models as targets to train each student model. Although group-derived targets give a good recipe for teacher-free distillation, group members are homogenized quickly with simple aggregation functions, leading to early saturated solutions. In this work, we propose Online Knowledge Distillation with Diverse peers (OKDDip), which performs two-level distillation during training with multiple auxiliary peers and one group leader. In the first-level distillation, each auxiliary peer holds an individual set of aggregation weights generated with an attention-based mechanism to derive its own targets from predictions of other auxiliary peers. Learning from distinct target distributions helps to boost peer diversity for effectiveness of group-based distillation. The second-level distillation is performed to transfer the knowledge in the ensemble of auxiliary peers further to the group leader, i.e., the model used for inference. Experimental results show that the proposed framework consistently gives better performance than state-of-the-art approaches without sacrificing training or inference complexity, demonstrating the effectiveness of the proposed two-level distillation framework.
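
The following is a minimal PyTorch sketch of the two-level distillation described in the abstract, intended only as an illustration; the names and values used here (peer_logits, leader_logits, query_proj, key_proj, the temperature T) are assumptions rather than the authors' released code. In the first level, each auxiliary peer derives its own soft target as an attention-weighted combination of the other auxiliary peers' softened predictions; in the second level, the group leader distills from the plain ensemble of the auxiliary peers.

```python
import torch
import torch.nn.functional as F

T = 3.0  # softening temperature (assumed value, not given in the abstract)

def first_level_loss(peer_logits, peer_feats, query_proj, key_proj):
    """Each auxiliary peer distills from its own attention-weighted
    aggregation of the other auxiliary peers' softened predictions."""
    # peer_logits: (m, B, C) logits of the m auxiliary peers
    # peer_feats:  (m, B, D) features used to compute the attention weights
    q = query_proj(peer_feats)                       # (m, B, D')
    k = key_proj(peer_feats)                         # (m, B, D')
    scores = torch.einsum('abd,cbd->bac', q, k)      # (B, m, m): peer a attends to peer c
    # mask the diagonal so each peer's target comes from the *other* peers,
    # following the abstract's wording
    eye = torch.eye(scores.size(-1), device=scores.device, dtype=torch.bool)
    alpha = F.softmax(scores.masked_fill(eye, float('-inf')), dim=-1)
    soft = F.softmax(peer_logits / T, dim=-1)        # (m, B, C) softened predictions
    targets = torch.einsum('bac,cbn->abn', alpha, soft)  # peer-specific targets (m, B, C)
    log_p = F.log_softmax(peer_logits / T, dim=-1)
    return F.kl_div(log_p, targets.detach(), reduction='batchmean') * T * T

def second_level_loss(leader_logits, peer_logits):
    """The group leader distills from the plain ensemble of the auxiliary peers."""
    ensemble = F.softmax(peer_logits / T, dim=-1).mean(dim=0)   # (B, C)
    log_p = F.log_softmax(leader_logits / T, dim=-1)
    return F.kl_div(log_p, ensemble.detach(), reduction='batchmean') * T * T
```

In training, these distillation terms would be combined with the usual cross-entropy loss on the ground-truth labels for every branch; only the group leader, the model used for inference per the abstract, is kept afterwards.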

Published

2020-04-03

How to Cite

Chen, D., Mei, J.-P., Wang, C., Feng, Y., & Chen, C. (2020). Online Knowledge Distillation with Diverse Peers. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3430-3437. https://doi.org/10.1609/aaai.v34i04.5746

Section

AAAI Technical Track: Machine Learning