Neuron Interaction Based Representation Composition for Neural Machine Translation

  • Jian Li The Chinese University of Hong Kong
  • Xing Wang Tencent AI Lab
  • Baosong Yang University of Macau
  • Shuming Shi Tencent AI Lab
  • Michael R. Lyu The Chinese University of Hong Kong
  • Zhaopeng Tu Tencent AI Lab

Abstract

Recent NLP studies reveal that substantial linguistic information can be attributed to single neurons, i.e., individual dimensions of the representation vectors. We hypothesize that modeling strong interactions among neurons helps to better capture complex information by composing the linguistic properties embedded in individual neurons. Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors. Specifically, we leverage bilinear pooling to model pairwise multiplicative interactions among individual neurons, and a low-rank approximation to make the model computationally feasible. We further propose extended bilinear pooling to incorporate first-order representations. Experiments on WMT14 English⇒German and English⇒French translation tasks show that our model consistently improves performance over the state-of-the-art Transformer baseline. Further analyses demonstrate that our approach indeed captures more syntactic and semantic information as expected.
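To make the abstract's core operations concrete, the sketch below illustrates low-rank bilinear pooling and its extended variant in NumPy. This is an illustrative sketch, not the authors' implementation: the function names and the choice of projection matrices `U`, `V` are assumptions; the low-rank form `(U^T x) ∘ (V^T y)` is the standard factorization of bilinear pooling, and the extended variant appends a constant 1 to each input so first-order (linear) terms survive alongside the second-order interactions.

```python
import numpy as np

def low_rank_bilinear_pooling(x, y, U, V):
    """Second-order composition of two representation vectors.

    Each output neuron k is (U^T x)_k * (V^T y)_k, a rank-1-per-component
    factorization of full bilinear pooling z_k = x^T W_k y, which models
    pairwise multiplicative interactions among individual neurons without
    materializing the full d x d interaction matrix.
    """
    return (U.T @ x) * (V.T @ y)

def extended_bilinear_pooling(x, y, U, V):
    """Extended variant: append 1 to each input before pooling.

    The appended constant lets the product expand into second-order terms
    (x_i * y_j), first-order terms (x_i and y_j alone), and a bias,
    so first-order representations are incorporated for free.
    U and V must therefore have d + 1 rows instead of d.
    """
    x_ext = np.append(x, 1.0)
    y_ext = np.append(y, 1.0)
    return (U.T @ x_ext) * (V.T @ y_ext)

# Hypothetical shapes for illustration: d-dim inputs, k-dim pooled output.
rng = np.random.default_rng(0)
d, k = 512, 64
x, y = rng.standard_normal(d), rng.standard_normal(d)
z = low_rank_bilinear_pooling(x, y,
                              rng.standard_normal((d, k)),
                              rng.standard_normal((d, k)))
z_ext = extended_bilinear_pooling(x, y,
                                  rng.standard_normal((d + 1, k)),
                                  rng.standard_normal((d + 1, k)))
```

In the paper's setting, `x` and `y` would stand for representations produced by different components (e.g., different layers or attention heads), and `U`, `V` would be learned; the low-rank factorization reduces the parameter count from O(d² k) for full bilinear pooling to O(d k).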

Published
2020-04-03
Section
AAAI Technical Track: Natural Language Processing