Fully Convolutional Video Captioning with Coarse-to-Fine and Inherited Attention

  • Kuncheng Fang Fudan University
  • Lian Zhou Fudan University
  • Cheng Jin Fudan University
  • Yuejie Zhang Fudan University
  • Kangnian Weng Shanghai University of Finance and Economics
  • Tao Zhang Shanghai University of Finance and Economics
  • Weiguo Fan University of Iowa

Abstract

Automatically generating natural language descriptions for video is an extremely complicated and challenging task. To overcome the limitations of traditional LSTM-based models for video captioning, we propose a novel architecture that generates optimal descriptions for videos. It focuses on constructing a new network structure that can generate sentences superior to those of the basic LSTM-based model, and on establishing special attention mechanisms that provide more useful visual information for caption generation. This scheme discards the traditional LSTM and instead exploits a fully convolutional network with coarse-to-fine and inherited attention, designed according to the characteristics of the fully convolutional structure. Our model not only outperforms the basic LSTM-based model, but also achieves performance comparable to that of state-of-the-art methods.
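The abstract's central move is replacing the LSTM decoder with a fully convolutional one. The standard building block for such decoders (as in convolutional sequence-to-sequence models generally, not necessarily this paper's exact architecture) is the causal 1D convolution: each output position sees only earlier words, so generation remains autoregressive. A minimal pure-Python sketch of that idea, with an illustrative function name of our choosing:

```python
def causal_conv1d(seq, kernel):
    """Causal 1D convolution: output[t] depends only on seq[t-k+1 .. t].

    The input is left-padded with zeros, so no future position leaks
    into any output -- the property a convolutional caption decoder
    needs to generate words one at a time.
    """
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(seq)
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(seq))]

# Stacking several such layers gives each position a growing receptive
# field over the sentence prefix, which is how a fully convolutional
# decoder models context without recurrence.
out = causal_conv1d([1.0, 2.0, 3.0, 4.0], [0.5, 0.5])  # -> [0.5, 1.5, 2.5, 3.5]
```

Unlike an LSTM, all positions of such a layer can be computed in parallel during training, which is the usual motivation for fully convolutional decoders.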

Published
2019-07-17