Embedding Compression with Isotropic Iterative Quantization

Authors

  • Siyu Liao, Rutgers University
  • Jie Chen, MIT-IBM Watson AI Lab
  • Yanzhi Wang, Northeastern University
  • Qinru Qiu, Syracuse University
  • Bo Yuan, Rutgers University

DOI:

https://doi.org/10.1609/aaai.v34i05.6350

Abstract

Continuous representations of words are a standard component of deep-learning-based NLP models. However, representing a large vocabulary requires significant memory, which can cause problems, particularly on resource-constrained platforms. We therefore propose an isotropic iterative quantization (IIQ) approach that compresses embedding vectors into binary ones, leveraging the iterative quantization technique well established for image retrieval while satisfying the isotropic property desired of PMI-based models. Experiments with pre-trained embeddings (GloVe and HDC) demonstrate a more than thirty-fold compression ratio with comparable, and sometimes even improved, performance relative to the original real-valued embedding vectors.
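The abstract's core mechanism, iterative quantization (ITQ), alternates between binarizing rotated vectors and re-fitting an orthogonal rotation. The NumPy sketch below illustrates that generic ITQ loop under the assumption of a zero-centered embedding matrix; it omits the isotropy treatment that distinguishes IIQ, and the function name itq_binarize and its parameters are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def itq_binarize(X, n_iter=50, seed=0):
        """Generic ITQ sketch: learn an orthogonal rotation R so that
        sign(X @ R) approximates X well, then emit {-1, +1} codes.

        X: (n_words, d) real-valued embeddings, assumed zero-centered.
        """
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Random orthogonal initialization via QR decomposition.
        R, _ = np.linalg.qr(rng.standard_normal((d, d)))
        for _ in range(n_iter):
            B = np.sign(X @ R)        # fix R, update binary codes
            B[B == 0] = 1.0           # break ties away from zero
            # Fix B, update R: the orthogonal Procrustes problem
            # argmin_R ||B - X R||_F, solved via the SVD of X^T B.
            U, _, Wt = np.linalg.svd(X.T @ B)
            R = U @ Wt
        B = np.sign(X @ R)
        B[B == 0] = 1.0
        return B, R

    # Example: binarize 50-dimensional embeddings for 1,000 words.
    X = np.random.default_rng(1).standard_normal((1000, 50))
    X -= X.mean(axis=0)               # zero-center before quantizing
    codes, R = itq_binarize(X)

Replacing each 32-bit float coordinate with a single bit is what makes a compression ratio above thirty-fold plausible.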

Published

2020-04-03

How to Cite

Liao, S., Chen, J., Wang, Y., Qiu, Q., & Yuan, B. (2020). Embedding Compression with Isotropic Iterative Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8336-8343. https://doi.org/10.1609/aaai.v34i05.6350

Section

AAAI Technical Track: Natural Language Processing