AAAI Publications, Thirty-First AAAI Conference on Artificial Intelligence

Learning Context-Specific Word/Character Embeddings
Xiaoqing Zheng, Jiangtao Feng, Yi Chen, Haoyuan Peng, Wenqing Zhang

Last modified: 2017-02-12

Abstract


Unsupervised word representations have been shown to improve predictive generalization on various NLP tasks. Most existing models, however, capture the relatedness among words rather than their "genuine" similarity, because context is typically represented as a sum (or average) of the neighbors' embeddings. This simplifies computation but ignores the fact that the meaning of a word is determined by its context, which reflects not only the surrounding words but also the rules used to combine them (i.e., compositionality). Moreover, much effort has been devoted to learning a single-prototype representation per word, which is problematic because many words are polysemous, and a single-prototype model cannot capture homonymy and polysemy. We present a neural network architecture that jointly learns word embeddings and context representations from large data sets. The explicitly produced context representations are then used to learn context-specific and multi-prototype word embeddings. We evaluated our embeddings on several NLP tasks, and the experimental results demonstrate that the proposed model outperforms its competitors and is applicable to intrinsically "character-based" languages.
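The abstract's central criticism can be made concrete. Below is a minimal, hypothetical sketch (not the paper's actual architecture) contrasting an order-insensitive context built by averaging neighbor embeddings with an order-sensitive alternative that applies a position-specific transform to each neighbor before summing; the embeddings and position matrices here are random stand-ins for quantities a real model would learn jointly from a corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with randomly initialized 4-d embeddings (illustrative
# only; real systems learn these from large unlabeled corpora).
vocab = ["bank", "river", "money", "loan", "water", "shore"]
emb = {w: rng.normal(size=4) for w in vocab}

def avg_context(words):
    """Order-insensitive context: a plain average of neighbor embeddings.
    This is the simplification the abstract criticizes -- it captures
    relatedness but ignores how the words are combined."""
    return np.mean([emb[w] for w in words], axis=0)

def positional_context(words):
    """A minimal order-sensitive alternative: weight each neighbor by a
    position-specific matrix before summing, so the same words in a
    different order yield a different context representation."""
    d = 4
    # Hypothetical fixed position transforms; a real model would learn
    # these jointly with the word embeddings.
    pos_mats = [np.eye(d) * (i + 1) for i in range(len(words))]
    return sum(M @ emb[w] for M, w in zip(pos_mats, words))

# Averaging collapses word order; the positional encoder does not.
print(np.allclose(avg_context(["river", "shore"]),
                  avg_context(["shore", "river"])))       # True
print(np.allclose(positional_context(["river", "shore"]),
                  positional_context(["shore", "river"])))  # False
```

A context representation that distinguishes word order is one prerequisite for the context-specific, multi-prototype embeddings the abstract describes: distinct context vectors for "bank" near "river" versus "bank" near "loan" can then index distinct sense prototypes.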

Keywords


Word embeddings; Neural networks; Unsupervised learning
