Feature Deformation Meta-Networks in Image Captioning of Novel Objects

Authors

  • Tingjia Cao, Fudan University
  • Ke Han, Fudan University
  • Xiaomei Wang, Fudan University
  • Lin Ma, Tencent AI Lab
  • Yanwei Fu, Fudan University
  • Yu-Gang Jiang, Fudan University
  • Xiangyang Xue, Fudan University

DOI:

https://doi.org/10.1609/aaai.v34i07.6620

Abstract

This paper studies the task of image captioning with novel objects, i.e., objects that appear only in test images. Intrinsically, this task reflects a model's ability to generalize in understanding and captioning visual concepts and objects unseen in the training set, and is similar in spirit to one/zero-shot learning. The critical difficulty is that no paired images and sentences of the novel objects are available to train the captioning model. Inspired by recent work (Chen et al. 2019b) that boosts one-shot learning by learning to generate various image deformations, we propose learning meta-networks that deform features for novel object captioning. To this end, we introduce the feature deformation meta-network (FDM-net), which is trained on source data and learns to adapt to novel object features detected by an auxiliary detection model. FDM-net consists of two sub-nets: a feature deformation sub-net and a scene graph sentence reconstruction sub-net, which produce the augmented image features and the corresponding sentences, respectively. Thus, rather than deforming images directly, FDM-net can efficiently and dynamically enlarge the set of paired images and texts by learning to deform image features. Extensive experiments are conducted on the widely used novel object captioning dataset, and the results show the effectiveness of our FDM-net. An ablation study and qualitative visualizations provide further insights into our model.
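The feature-deformation idea described above can be illustrated with a toy sketch. Everything here is hypothetical (function names, feature dimensions, and the fixed noise-plus-gating transform are illustrative only); the paper's actual sub-nets are learned meta-networks, not this hand-written transform:

```python
import numpy as np

rng = np.random.default_rng(0)

def deform_features(feat, n_aug=4, scale=0.1, rng=rng):
    """Toy stand-in for a feature-deformation sub-net: produce n_aug
    perturbed copies of a detected object feature vector, simulating
    the augmented image features that would be paired with
    reconstructed sentences for training."""
    noise = rng.normal(0.0, scale, size=(n_aug, feat.shape[0]))
    gate = rng.uniform(0.5, 1.5, size=(n_aug, 1))  # per-copy rescaling
    return gate * (feat + noise)

# A detected "novel object" feature (e.g., from an auxiliary detector).
feat = rng.normal(size=2048).astype(np.float32)
aug = deform_features(feat)
print(aug.shape)  # (4, 2048)
```

Each deformed copy, paired with a sentence from the reconstruction sub-net, enlarges the paired training data without touching the images themselves.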

Published

2020-04-03

How to Cite

Cao, T., Han, K., Wang, X., Ma, L., Fu, Y., Jiang, Y.-G., & Xue, X. (2020). Feature Deformation Meta-Networks in Image Captioning of Novel Objects. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10494-10501. https://doi.org/10.1609/aaai.v34i07.6620

Section

AAAI Technical Track: Vision