Span-Based Neural Buffer: Towards Efficient and Effective Utilization of Long-Distance Context for Neural Sequence Models

Authors

  • Yangming Li, Ant Financial
  • Kaisheng Yao, Ant Financial
  • Libo Qin, Harbin Institute of Technology
  • Shuang Peng, Ant Financial
  • Yijia Liu, Alibaba
  • Xiaolong Li, Ant Financial

DOI:

https://doi.org/10.1609/aaai.v34i05.6343

Abstract

Neural sequence models, though widely used for modeling sequential data such as in language modeling, exhibit a sequential recency bias (Kuncoro et al. 2018) toward the local context, which limits their ability to capture long-distance context. To address this problem, this paper proposes augmenting sequence models with a span-based neural buffer that efficiently represents long-distance context and allows a gate policy network to make interpolated predictions from both the neural buffer and the underlying sequence model. Training this policy network to utilize long-distance context is, however, challenging due to the simple sentence dominance problem (Marvin and Linzen 2018). To alleviate this problem, we propose a novel training algorithm that combines annealed maximum likelihood estimation with intrinsic-reward-driven reinforcement learning. Sequence models equipped with the proposed span-based neural buffer significantly improve the state-of-the-art perplexities on the benchmark Penn Treebank and WikiText-2 datasets, to 43.9 and 35.2 respectively. Extensive analysis confirms that both the proposed architecture and the training algorithm contribute to the improvements.
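The prediction mechanism described in the abstract is a gated mixture of two next-token distributions, one from the span-based buffer and one from the underlying sequence model. Below is a minimal sketch of such a gated interpolation in PyTorch; the module name, the gate parameterization, and the inputs (a hidden state and a buffer summary vector) are illustrative assumptions, since the abstract does not specify the gate policy network's exact architecture or the details of its annealed MLE plus reinforcement-learning training.

```python
import torch
import torch.nn as nn

class GatedInterpolation(nn.Module):
    """Hypothetical sketch: a gate network mixes the next-token
    distribution from a long-distance buffer with the one from the
    underlying sequence model, as the abstract describes."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Scalar gate conditioned on the current hidden state and a
        # summary vector of the buffer (both assumed inputs).
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_size, 1),
            nn.Sigmoid(),
        )

    def forward(self, hidden, buffer_summary, p_model, p_buffer):
        # hidden, buffer_summary: (batch, hidden_size)
        # p_model, p_buffer: (batch, vocab_size) next-token distributions
        g = self.gate(torch.cat([hidden, buffer_summary], dim=-1))  # (batch, 1)
        # Convex combination of two valid distributions is itself valid.
        return g * p_buffer + (1.0 - g) * p_model

# Example usage with random inputs (batch of 2, hidden size 8, vocab of 10).
mix = GatedInterpolation(hidden_size=8)
h = torch.randn(2, 8)
b = torch.randn(2, 8)
p_model = torch.softmax(torch.randn(2, 10), dim=-1)
p_buffer = torch.softmax(torch.randn(2, 10), dim=-1)
p = mix(h, b, p_model, p_buffer)  # rows still sum to 1
```

Because the gate outputs a value in (0, 1), the interpolated result remains a proper probability distribution, and the gate effectively decides, per prediction step, how much to trust the long-distance buffer over the local sequence model.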

Published

2020-04-03

How to Cite

Li, Y., Yao, K., Qin, L., Peng, S., Liu, Y., & Li, X. (2020). Span-Based Neural Buffer: Towards Efficient and Effective Utilization of Long-Distance Context for Neural Sequence Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8277-8284. https://doi.org/10.1609/aaai.v34i05.6343

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing