GRET: Global Representation Enhanced Transformer

Authors

  • Rongxiang Weng, Alibaba Group
  • Haoran Wei, Alibaba Group
  • Shujian Huang, Nanjing University
  • Heng Yu, Alibaba Group
  • Lidong Bing, Alibaba Group
  • Weihua Luo, Alibaba Group
  • Jiajun Chen, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v34i05.6464

Abstract

Transformer, based on the encoder-decoder framework, has achieved state-of-the-art performance on several natural language generation tasks. The encoder maps the words of the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence. These hidden states usually correspond to the input words and focus on capturing local information. However, the global (sentence-level) information is seldom explored, leaving room for improving generation quality. In this paper, we propose a novel global representation enhanced Transformer (GRET) to explicitly model global representation in the Transformer network. Specifically, in the proposed model, an external state is generated for the global representation from the encoder. The global representation is then fused into the decoder during the decoding process to improve generation quality. We conduct experiments on two text generation tasks: machine translation and text summarization. Experimental results on four WMT machine translation tasks and the LCSTS text summarization task demonstrate the effectiveness of the proposed approach on natural language generation.
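To make the high-level idea concrete, below is a minimal sketch of fusing a sentence-level global vector into decoder hidden states. The mean-pooling aggregation, the sigmoid gate, and the names (`GlobalFusionSketch`, `d_model`) are illustrative assumptions, not the paper's method; GRET's actual aggregation and fusion mechanisms differ (see the full paper).

```python
import torch
import torch.nn as nn

class GlobalFusionSketch(nn.Module):
    """Illustrative sketch: fuse a sentence-level "global" vector into
    decoder states. Mean pooling and gated fusion are assumptions for
    illustration only; GRET's actual mechanisms are described in the paper."""

    def __init__(self, d_model: int):
        super().__init__()
        # Gate controlling how much global information each decoder state absorbs.
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, encoder_states: torch.Tensor,
                decoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, src_len, d_model)
        # decoder_states: (batch, tgt_len, d_model)
        # Assumed aggregation: mean-pool encoder states into one global vector.
        g = encoder_states.mean(dim=1, keepdim=True)   # (batch, 1, d_model)
        g = g.expand(-1, decoder_states.size(1), -1)   # broadcast over target steps
        # Gated fusion: mix each decoder state with the global representation.
        gate = torch.sigmoid(self.gate(torch.cat([decoder_states, g], dim=-1)))
        return decoder_states + gate * g

# Usage: fuse a global vector into random decoder states.
fusion = GlobalFusionSketch(d_model=8)
enc = torch.randn(2, 5, 8)   # batch of 2, source length 5
dec = torch.randn(2, 7, 8)   # batch of 2, target length 7
out = fusion(enc, dec)       # (2, 7, 8)
```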

Published

2020-04-03

How to Cite

Weng, R., Wei, H., Huang, S., Yu, H., Bing, L., Luo, W., & Chen, J. (2020). GRET: Global Representation Enhanced Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9258-9265. https://doi.org/10.1609/aaai.v34i05.6464

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing