Neural Machine Translation with Byte-Level Subwords

Authors

  • Changhan Wang, Facebook AI Research
  • Kyunghyun Cho, Facebook AI Research
  • Jiatao Gu, Facebook AI Research

DOI:

https://doi.org/10.1609/aaai.v34i05.6451

Abstract

Almost all existing machine translation models are built on top of character-based vocabularies: characters, subwords, or words. Rare characters from noisy text or character-rich languages such as Japanese and Chinese, however, can unnecessarily occupy vocabulary slots and limit the vocabulary's compactness. Representing text at the level of bytes and using the set of 256 possible byte values as the vocabulary is a potential solution to this issue. High computational cost, however, has prevented it from being widely deployed or used in practice. In this paper, we investigate byte-level subwords, specifically byte-level BPE (BBPE), which is more compact than a character vocabulary and has no out-of-vocabulary tokens, yet is more efficient than using pure bytes alone. We claim that contextualizing BBPE embeddings is necessary, which can be implemented by a convolutional or recurrent layer. Our experiments show that BBPE achieves performance comparable to BPE while its vocabulary is only 1/8 the size. In the multilingual setting, BBPE maximizes vocabulary sharing across many languages and achieves better translation quality. Moreover, we show that BBPE enables transferring models between languages with non-overlapping character sets.
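To make the byte-level representation concrete, below is a minimal illustrative sketch (not code from the paper): it encodes text as UTF-8 byte IDs, the 256-symbol base vocabulary on which byte-level BPE merges would then be learned. The helper names to_byte_tokens and from_byte_tokens are hypothetical.

```python
# Minimal illustrative sketch (not the authors' code): text as UTF-8 byte IDs,
# the 256-symbol base vocabulary that byte-level BPE (BBPE) merges build on.
from typing import List


def to_byte_tokens(text: str) -> List[int]:
    """Encode text as a sequence of byte IDs in [0, 255]; nothing is ever OOV."""
    return list(text.encode("utf-8"))


def from_byte_tokens(tokens: List[int]) -> str:
    """Decode byte IDs back to text; malformed sequences are replaced, not dropped."""
    return bytes(tokens).decode("utf-8", errors="replace")


if __name__ == "__main__":
    # A rare CJK string that could waste slots in a character vocabulary
    # is just six ordinary bytes at the byte level.
    sample = "翻訳"  # Japanese for "translation"
    ids = to_byte_tokens(sample)
    print(ids)                    # [231, 191, 187, 232, 168, 179]
    print(from_byte_tokens(ids))  # round-trips back to "翻訳"
```

Because every possible byte value is already in the vocabulary, rare characters like the Japanese example above never become out-of-vocabulary tokens; BBPE then merges frequent byte sequences into subword units, a step this sketch does not cover.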

Published

2020-04-03

How to Cite

Wang, C., Cho, K., & Gu, J. (2020). Neural Machine Translation with Byte-Level Subwords. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9154-9160. https://doi.org/10.1609/aaai.v34i05.6451

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing