Visual Agreement Regularized Training for Multi-Modal Machine Translation

Authors

  • Pengcheng Yang, Peking University
  • Boxing Chen, Alibaba DAMO Academy
  • Pei Zhang, Alibaba DAMO Academy
  • Xu Sun, Peking University

DOI:

https://doi.org/10.1609/aaai.v34i05.6484

Abstract

Multi-modal machine translation aims to translate a source sentence into a different language in the presence of a paired image. Previous work suggests that the additional visual information provides only dispensable help to translation, needed in just a few special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g., “ball” in English and “ballon” in French). In addition, a simple yet effective multi-head co-attention model is introduced to capture interactions between visual and textual features. Results show that our approaches outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the proposed regularized training effectively improves the agreement of attention on the image, leading to better use of visual information.
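
For illustration only, the following is a minimal sketch (not the authors' released code) of how such a visual agreement regularizer might be implemented in PyTorch. It assumes each translation direction exposes its attention distribution over image regions for every generated word and that the positions of semantically equivalent visual words are given; the function names, the symmetric KL penalty, and the weighting coefficient `lam` are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def visual_agreement_loss(attn_fwd, attn_bwd, aligned_pairs, eps=1e-8):
    """Penalize disagreement between the source-to-target and target-to-source
    models' attention over image regions at aligned visual-word positions.

    attn_fwd: (T_tgt, R) image attention of the source-to-target model
    attn_bwd: (T_src, R) image attention of the target-to-source model
    aligned_pairs: list of (i, j) positions of equivalent visual words,
                   e.g. the position of "ball" in the English sentence and
                   "ballon" in the French sentence
    """
    losses = []
    for i, j in aligned_pairs:
        p = attn_fwd[i].clamp_min(eps)
        q = attn_bwd[j].clamp_min(eps)
        # Symmetric KL divergence between the two attention distributions.
        losses.append(0.5 * (F.kl_div(q.log(), p, reduction="sum")
                             + F.kl_div(p.log(), q, reduction="sum")))
    return torch.stack(losses).mean() if losses else attn_fwd.new_zeros(())


def joint_loss(nll_fwd, nll_bwd, attn_fwd, attn_bwd, aligned_pairs, lam=0.5):
    # Joint objective: translation losses of both directions plus the
    # visual agreement regularizer, weighted by an illustrative coefficient.
    return nll_fwd + nll_bwd + lam * visual_agreement_loss(
        attn_fwd, attn_bwd, aligned_pairs)
```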

Published

2020-04-03

How to Cite

Yang, P., Chen, B., Zhang, P., & Sun, X. (2020). Visual Agreement Regularized Training for Multi-Modal Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9418-9425. https://doi.org/10.1609/aaai.v34i05.6484

Section

AAAI Technical Track: Natural Language Processing