Word Embedding as Maximum A Posteriori Estimation

Authors

  • Shoaib Jameel, University of Kent
  • Zihao Fu, The Chinese University of Hong Kong
  • Bei Shi, Tencent AI Lab
  • Wai Lam, The Chinese University of Hong Kong
  • Steven Schockaert, Cardiff University

DOI:

https://doi.org/10.1609/aaai.v33i01.33016562

Abstract

The GloVe word embedding model relies on solving a global optimization problem, which can be reformulated as a maximum likelihood estimation problem. In this paper, we propose to generalize this approach to word embedding by considering parametrized variants of the GloVe model and incorporating priors on these parameters. To demonstrate the usefulness of this approach, we consider a word embedding model in which each context word is associated with a corresponding variance, intuitively encoding how informative it is. Using our framework, we can then learn these variances together with the resulting word vectors in a unified way. We experimentally show that the resulting word embedding models outperform GloVe, as well as many popular alternatives.
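The abstract's MAP view can be sketched as follows. This is a hedged reconstruction from the standard GloVe objective and the abstract's description, not the paper's own equations: the per-context variance σ_j and the prior p(σ_j) are assumptions about how the model is parametrized.

```latex
% Standard GloVe least-squares objective (known):
\min_{w, \tilde{w}, b, \tilde{b}} \sum_{i,j} f(x_{ij})
  \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log x_{ij} \right)^2

% MLE reading: residuals are Gaussian with a shared variance.
% A MAP variant (sketch): give each context word j its own variance
% \sigma_j, encoding how informative it is, and place a prior p(\sigma_j):
\max_{w, \tilde{w}, b, \tilde{b}, \sigma} \sum_{i,j}
  \left[ -\frac{\left( \log x_{ij} - w_i^\top \tilde{w}_j - b_i - \tilde{b}_j \right)^2}{2\sigma_j^2}
         - \log \sigma_j \right]
  + \sum_j \log p(\sigma_j)
```

Under this reading, a small learned σ_j means context word j constrains the embeddings tightly (it is informative), while a large σ_j down-weights its co-occurrence counts; the word vectors and the variances are optimized jointly.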

Published

2019-07-17

How to Cite

Jameel, S., Fu, Z., Shi, B., Lam, W., & Schockaert, S. (2019). Word Embedding as Maximum A Posteriori Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6562-6569. https://doi.org/10.1609/aaai.v33i01.33016562

Section

AAAI Technical Track: Natural Language Processing