Ultrafast Video Attention Prediction with Coupled Knowledge Distillation

Authors

  • Kui Fu State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University
  • Peipei Shi iQIYI, Inc
  • Yafei Song National Engineering Laboratory for Video Technology, School of EE&CS, Peking University
  • Shiming Ge Institute of Information Engineering, Chinese Academy of Sciences
  • Xiangju Lu iQIYI, Inc
  • Jia Li State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University

DOI:

https://doi.org/10.1609/aaai.v34i07.6710

Abstract

Large convolutional neural network models have recently demonstrated impressive performance on video attention prediction, but they typically demand intensive computation and large memory. To address these issues, we design an extremely lightweight network with ultrafast speed, named UVA-Net. The network is built on depth-wise convolutions and takes low-resolution images as input. However, such straightforward acceleration alone degrades performance dramatically. We therefore propose a coupled knowledge distillation strategy to augment and train the network effectively. With this strategy, the model can automatically discover and emphasize implicit yet useful cues contained in the data. Moreover, both the spatial and the temporal knowledge learned by high-resolution, complex teacher networks can be distilled and transferred into the proposed low-resolution, lightweight spatiotemporal network. Experimental results show that our model achieves performance comparable to that of 11 state-of-the-art video attention prediction models, while requiring only a 0.68 MB memory footprint and running at about 10,106 FPS on a GPU and 404 FPS on a CPU, 206 times faster than previous models.
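To make the design concrete, below is a minimal PyTorch sketch (not the authors' released code) of the two ingredients the abstract names: a depth-wise separable convolution block of the kind UVA-Net is built from, and a hypothetical coupled-distillation objective in which a low-resolution student matches both a spatial and a temporal high-resolution teacher while fitting the ground-truth attention map. All names, loss forms, and weights here are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DepthwiseSeparableConv(nn.Module):
        """Depth-wise 3x3 convolution followed by a point-wise 1x1 convolution,
        the kind of block the abstract says UVA-Net is constructed from."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                       padding=1, groups=in_ch, bias=False)
            self.bn1 = nn.BatchNorm2d(in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_ch)

        def forward(self, x):
            x = F.relu(self.bn1(self.depthwise(x)))
            return F.relu(self.bn2(self.pointwise(x)))

    def coupled_distillation_loss(student_map, student_feat_s, student_feat_t,
                                  teacher_feat_s, teacher_feat_t, gt_map,
                                  lambda_s=1.0, lambda_t=1.0):
        """Hypothetical coupled objective: fit the ground-truth attention map
        while matching the features of a spatial teacher and a temporal teacher.
        The loss terms and weights are illustrative assumptions, not the
        paper's exact formulation. Maps are assumed sigmoid-normalized to [0, 1]."""
        task = F.binary_cross_entropy(student_map, gt_map)
        distill_spatial = F.mse_loss(student_feat_s, teacher_feat_s)
        distill_temporal = F.mse_loss(student_feat_t, teacher_feat_t)
        return task + lambda_s * distill_spatial + lambda_t * distill_temporal

    # Quick shape check on a hypothetical low-resolution frame.
    x = torch.randn(1, 3, 64, 64)
    block = DepthwiseSeparableConv(3, 16)
    print(block(x).shape)  # torch.Size([1, 16, 64, 64])

A depth-wise separable block like this replaces a dense 3x3 convolution's roughly in_ch * out_ch * 9 multiplications per pixel with in_ch * 9 + in_ch * out_ch, which is where most of the memory and speed savings come from.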

Published

2020-04-03

How to Cite

Fu, K., Shi, P., Song, Y., Ge, S., Lu, X., & Li, J. (2020). Ultrafast Video Attention Prediction with Coupled Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10802-10809. https://doi.org/10.1609/aaai.v34i07.6710

Section

AAAI Technical Track: Vision