Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering

Authors

  • Xiangpeng Li University of Electronic Science and Technology of China
  • Jingkuan Song University of Electronic Science and Technology of China
  • Lianli Gao University of Electronic Science and Technology of China
  • Xianglong Liu Beihang University
  • Wenbing Huang Tencent AI Lab
  • Xiangnan He National University of Singapore
  • Chuang Gan Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33018658

Abstract

Most of the recent progress on visual question answering is based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often time-consuming and have difficulty modeling long-range dependencies due to the sequential nature of RNNs. We propose a new architecture, Positional Self-Attention with Co-attention (PSAC), which does not require RNNs for video question answering. Specifically, inspired by the success of self-attention in the machine translation task, we propose a Positional Self-Attention that calculates the response at each position by attending to all positions within the same sequence, and then adds representations of absolute positions. PSAC can therefore exploit the global dependencies of the question and the temporal information in the video, and allows question and video encoding to be executed in parallel. Furthermore, in addition to attending to the video features relevant to the given question (i.e., video attention), we utilize a co-attention mechanism that simultaneously models "what words to listen to" (question attention). To the best of our knowledge, this is the first work to replace RNNs with self-attention for the task of visual question answering. Experimental results on four tasks on the benchmark dataset show that our model significantly outperforms the state of the art on three tasks and attains comparable results on the Count task. Our model requires less computation time and achieves better performance than RNN-based methods. An additional ablation study demonstrates the effect of each component of our proposed model.
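As a rough illustration of the positional self-attention the abstract describes, the sketch below adds learned absolute-position embeddings to a feature sequence (video frames or question words) before applying standard self-attention over all positions. This is a minimal sketch under stated assumptions, not the authors' implementation: the layer name, the use of learned position embeddings, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class PositionalSelfAttention(nn.Module):
    """Hypothetical sketch of positional self-attention: each position
    attends to all positions in the same sequence (no recurrence, so the
    whole sequence is processed in parallel), with representations of
    absolute positions added to the inputs. Not the paper's released code."""

    def __init__(self, dim, max_len=512, num_heads=8):
        super().__init__()
        # Learned embedding for each absolute position (an assumption;
        # fixed sinusoidal encodings would also fit the description).
        self.pos_embed = nn.Embedding(max_len, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, dim) -- frame or word features
        positions = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_embed(positions)   # inject absolute-position info
        out, _ = self.attn(x, x, x)         # every position attends to all
        return out
```

Unlike an RNN encoder, nothing here depends on the previous time step, which is what makes the question and video encoding parallelizable and gives every position direct access to every other position (the global dependencies noted above).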

Published

2019-07-17

How to Cite

Li, X., Song, J., Gao, L., Liu, X., Huang, W., He, X., & Gan, C. (2019). Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8658-8665. https://doi.org/10.1609/aaai.v33i01.33018658

Section

AAAI Technical Track: Vision