Fully Convolutional Video Captioning with Coarse-to-Fine and Inherited Attention

Authors

  • Kuncheng Fang, Fudan University
  • Lian Zhou, Fudan University
  • Cheng Jin, Fudan University
  • Yuejie Zhang, Fudan University
  • Kangnian Weng, Shanghai University of Finance and Economics
  • Tao Zhang, Shanghai University of Finance and Economics
  • Weiguo Fan, University of Iowa

DOI:

https://doi.org/10.1609/aaai.v33i01.33018271

Abstract

Automatically generating natural language descriptions for videos is an extremely complicated and challenging task. To overcome the limitations of traditional LSTM-based models for video captioning, we propose a novel architecture that generates optimal descriptions for videos. It focuses on constructing a new network structure that generates sentences superior to those of the basic LSTM model, and on establishing special attention mechanisms that provide more useful visual information for caption generation. This scheme discards the traditional LSTM and instead exploits a fully convolutional network with coarse-to-fine and inherited attention, designed according to the characteristics of the fully convolutional structure. Our model can not only outperform the basic LSTM-based model, but also achieve performance comparable to that of state-of-the-art methods.
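To make the central idea concrete, below is a minimal sketch of a fully convolutional caption decoder that attends to video features, in the spirit of replacing an LSTM with masked (causal) 1-D convolutions. All class names, dimensions, and the single dot-product attention layer are illustrative assumptions; the paper's coarse-to-fine and inherited attention mechanisms are not reproduced here.

```python
# Hypothetical sketch: a causal convolutional decoder for video captioning.
# Not the authors' implementation; a generic illustration of decoding with
# convolutions instead of an LSTM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """1-D convolution over the word sequence, left-padded so that
    position t only sees positions <= t (needed for autoregressive decoding)."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1          # pad on the left only => causal
        self.conv = nn.Conv1d(dim, dim, kernel_size)

    def forward(self, x):                   # x: (batch, dim, seq_len)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return F.relu(out) + x              # residual connection

class ConvCaptionDecoder(nn.Module):
    def __init__(self, vocab_size, dim=256, feat_dim=512, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.blocks = nn.ModuleList(
            [CausalConvBlock(dim) for _ in range(num_layers)])
        self.feat_proj = nn.Linear(feat_dim, dim)   # project frame features
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, words, video_feats):
        # words: (batch, seq_len) token ids
        # video_feats: (batch, num_frames, feat_dim) per-frame features
        x = self.embed(words).transpose(1, 2)       # (batch, dim, seq_len)
        for block in self.blocks:
            x = block(x)
        h = x.transpose(1, 2)                       # (batch, seq_len, dim)
        v = self.feat_proj(video_feats)             # (batch, frames, dim)
        # simple dot-product attention of each word position over frames
        attn = torch.softmax(h @ v.transpose(1, 2), dim=-1)
        ctx = attn @ v                              # attended video context
        return self.out(h + ctx)                    # (batch, seq_len, vocab)

# Unlike an LSTM, all time steps are computed in one parallel pass.
logits = ConvCaptionDecoder(vocab_size=10000)(
    torch.randint(0, 10000, (2, 12)), torch.randn(2, 20, 512))
```

The key design point the abstract alludes to is visible here: because the decoder has no recurrent state, every word position is computed in parallel during training, and the convolutional structure makes it natural to attach attention at each layer.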

Published

2019-07-17

How to Cite

Fang, K., Zhou, L., Jin, C., Zhang, Y., Weng, K., Zhang, T., & Fan, W. (2019). Fully Convolutional Video Captioning with Coarse-to-Fine and Inherited Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8271-8278. https://doi.org/10.1609/aaai.v33i01.33018271

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision