Character n-Gram Embeddings to Improve RNN Language Models

Authors

  • Sho Takase, NTT
  • Jun Suzuki, Tohoku University
  • Masaaki Nagata, NTT

DOI:

https://doi.org/10.1609/aaai.v33i01.33015074

Abstract

This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams, building on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on two application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
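To make the core idea concrete, the following is a minimal PyTorch sketch, not the authors' exact architecture: a word's embedding is computed as the sum of its character n-gram embeddings and combined (here by simple addition) with an ordinary word embedding before being fed to the RNN. The class name `CharNGramWordEmbedding`, the hash-bucketing of n-grams, the boundary markers, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


def char_ngrams(word: str, n_min: int = 2, n_max: int = 4):
    """Return the character n-grams of a word, with boundary markers."""
    marked = f"<{word}>"
    return [marked[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]


class CharNGramWordEmbedding(nn.Module):
    """Hypothetical module: word embedding enriched with char n-grams."""

    def __init__(self, vocab_size: int, ngram_buckets: int, dim: int):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        # Hash n-grams into a fixed number of buckets to bound memory.
        self.ngram_emb = nn.Embedding(ngram_buckets, dim)
        self.ngram_buckets = ngram_buckets

    def forward(self, word: str, word_id: int) -> torch.Tensor:
        ids = torch.tensor([hash(g) % self.ngram_buckets
                            for g in char_ngrams(word)])
        ngram_vec = self.ngram_emb(ids).sum(dim=0)   # sum of n-gram vectors
        word_vec = self.word_emb(torch.tensor(word_id))
        return word_vec + ngram_vec                   # combined embedding


emb = CharNGramWordEmbedding(vocab_size=10000, ngram_buckets=2**18, dim=300)
vec = emb("language", word_id=42)
print(vec.shape)  # torch.Size([300])
```

Summing over n-grams keeps the word representation fixed-dimensional regardless of word length, and the shared n-gram table lets morphologically related words (e.g., "run" / "running") share parameters.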

Published

2019-07-17

How to Cite

Takase, S., Suzuki, J., & Nagata, M. (2019). Character n-Gram Embeddings to Improve RNN Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5074-5082. https://doi.org/10.1609/aaai.v33i01.33015074

Section

AAAI Technical Track: Machine Learning