Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition

  • Shiming Ge Chinese Academy of Sciences
  • Kangkai Zhang Chinese Academy of Sciences
  • Haolin Liu Chinese Academy of Sciences
  • Yingying Hua Chinese Academy of Sciences
  • Shengwei Zhao Chinese Academy of Sciences
  • Xin Jin Beijing Electronic Science and Technology Institute
  • Hao Wen CloudWalk Technology Co., Ltd

Abstract

Despite the great success of recent deep models on many image recognition tasks, directly applying them to low-resolution images often yields low accuracy due to the loss of informative details during resolution degradation. However, these images are still recognizable to subjects who are familiar with the corresponding high-resolution ones. Inspired by this, we propose a teacher-student learning approach that facilitates low-resolution image recognition via hybrid order relational knowledge distillation. The approach involves three streams: the teacher stream is pretrained to recognize high-resolution images with high accuracy, the student stream learns to identify low-resolution images by mimicking the teacher's behaviors, and an extra assistant stream is introduced as a bridge to help transfer knowledge from the teacher to the student. To extract sufficient knowledge and reduce the loss in accuracy, the learning of the student is supervised with multiple losses that preserve similarities in relational structures of various orders. In this way, the capability of recovering missing details of familiar low-resolution images can be effectively enhanced, leading to better knowledge transfer. Extensive experiments on metric learning, low-resolution image classification and low-resolution face recognition tasks show the effectiveness of our approach, while using reduced models.
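To make the idea of "hybrid order" relational supervision concrete, the sketch below pairs a first-order loss (the student's low-resolution embeddings directly mimic the teacher's high-resolution embeddings) with a second-order loss (the pairwise distance structure among the student's embeddings matches the teacher's). This is a minimal NumPy illustration under assumed conventions, not the authors' implementation; the function names, the distance normalization, and the toy data are assumptions.

```python
import numpy as np

def pairwise_distances(feats):
    # feats: (n, d) batch of embeddings; returns an (n, n) Euclidean distance matrix
    diff = feats[:, None, :] - feats[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def first_order_loss(t_feats, s_feats):
    # instance-level (first-order) mimicry: match each embedding directly
    return float(((t_feats - s_feats) ** 2).mean())

def second_order_loss(t_feats, s_feats):
    # relation-level (second-order) mimicry: match pairwise distance structures
    dt = pairwise_distances(t_feats)
    ds = pairwise_distances(s_feats)
    # normalize by the mean distance so the two embedding spaces are scale-comparable
    dt = dt / (dt.mean() + 1e-8)
    ds = ds / (ds.mean() + 1e-8)
    return float(((dt - ds) ** 2).mean())

# toy batch: teacher embeddings of HR images, student embeddings of LR counterparts
rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 16))
student = teacher + 0.1 * rng.normal(size=(8, 16))  # student roughly mimics teacher

total = first_order_loss(teacher, student) + second_order_loss(teacher, student)
```

In a full pipeline, `total` would be combined with a classification loss and backpropagated through the student network only, leaving the pretrained teacher fixed.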

Published
2020-04-03
Section
AAAI Technical Track: Vision