Joint Commonsense and Relation Reasoning for Image and Video Captioning

Authors

  • Jingyi Hou, Beijing Institute of Technology
  • Xinxiao Wu, Beijing Institute of Technology
  • Xiaoxun Zhang, Alibaba Group
  • Yayun Qi, Beijing Institute of Technology
  • Yunde Jia, Beijing Institute of Technology
  • Jiebo Luo, University of Rochester

DOI:

https://doi.org/10.1609/aaai.v34i07.6731

Abstract

Exploiting relationships between objects for image and video captioning has received increasing attention. Most existing methods depend heavily on pre-trained detectors of objects and their relationships, and thus may not work well when facing detection challenges such as heavy occlusion, tiny objects, and long-tail classes. In this paper, we propose a joint commonsense and relation reasoning method that exploits prior knowledge for image and video captioning without relying on any detectors. The prior knowledge provides semantic correlations and constraints between objects, serving as guidance to build semantic graphs that summarize object relationships, some of which cannot be directly perceived from images or videos. In particular, our method is implemented as an iterative learning algorithm that alternates between 1) commonsense reasoning, which embeds visual regions into the semantic space to build a semantic graph, and 2) relation reasoning, which encodes semantic graphs to generate sentences. Experiments on several benchmark datasets validate the effectiveness of our prior-knowledge-based approach.
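
To make the alternating scheme in the abstract concrete, below is a minimal PyTorch sketch of the two reasoning steps and their iterative training. All module names, dimensions, and the simple attention and message-passing choices are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    # Illustrative sketch only: hypothetical modules and dimensions,
    # not the paper's model.
    VIS_DIM, SEM_DIM, VOCAB = 512, 300, 1000

    class CommonsenseReasoning(nn.Module):
        """Embed visual regions into the semantic space, guided by
        prior-knowledge concept embeddings, to form semantic-graph nodes."""
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(VIS_DIM, SEM_DIM)

        def forward(self, regions, concepts):
            sem = self.proj(regions)                           # (R, SEM_DIM)
            attn = torch.softmax(sem @ concepts.t(), dim=-1)   # align regions with prior concepts
            return attn @ concepts                             # knowledge-grounded node features

    class RelationReasoning(nn.Module):
        """Encode the semantic graph by message passing, then decode a sentence."""
        def __init__(self):
            super().__init__()
            self.msg = nn.Linear(SEM_DIM, SEM_DIM)
            self.rnn = nn.GRU(SEM_DIM, SEM_DIM, batch_first=True)
            self.out = nn.Linear(SEM_DIM, VOCAB)

        def forward(self, nodes, adj, max_len):
            nodes = torch.relu(nodes + adj @ self.msg(nodes))  # one message-passing round
            ctx = nodes.mean(0, keepdim=True)                  # (1, SEM_DIM) graph summary
            h, _ = self.rnn(ctx.unsqueeze(0).repeat(1, max_len, 1))
            return self.out(h)                                 # (1, max_len, VOCAB)

    # Dummy inputs: 5 regions, 20 prior concepts, a uniform relation graph,
    # and a random ground-truth caption of 8 tokens.
    regions = torch.randn(5, VIS_DIM)
    concepts = torch.randn(20, SEM_DIM)
    adj = torch.full((5, 5), 0.2)
    target = torch.randint(0, VOCAB, (8,))

    cr, rr = CommonsenseReasoning(), RelationReasoning()
    opt_cr = torch.optim.Adam(cr.parameters(), lr=1e-4)
    opt_rr = torch.optim.Adam(rr.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(4):
        # 1) Commonsense step: update only the region-to-semantic embedding.
        logits = rr(cr(regions, concepts), adj, max_len=8)
        loss = loss_fn(logits.squeeze(0), target)
        opt_cr.zero_grad(); loss.backward(); opt_cr.step()
        # 2) Relation step: update only the graph encoder and sentence decoder.
        logits = rr(cr(regions, concepts), adj, max_len=8)
        loss = loss_fn(logits.squeeze(0), target)
        opt_rr.zero_grad(); loss.backward(); opt_rr.step()

Stepping only one optimizer at a time mirrors the alternation described in the abstract: the commonsense step refines how regions are grounded in the semantic space, while the relation step refines how the resulting graph is encoded into a sentence.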

Published

2020-04-03

How to Cite

Hou, J., Wu, X., Zhang, X., Qi, Y., Jia, Y., & Luo, J. (2020). Joint Commonsense and Relation Reasoning for Image and Video Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10973-10980. https://doi.org/10.1609/aaai.v34i07.6731

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision