AAAI Publications, The Thirtieth International FLAIRS Conference

Can Word Embeddings Help Find Latent Emotions in Text? Preliminary Results
Armin Seyeditabari, Wlodek Zadrozny

Abstract


We report the results of several experiments evaluating the performance of word embeddings on the semantic similarity of emotions. Our experiments suggest that standard embeddings such as GloVe and Word2Vec have very limited applicability to identifying emotions in text. Namely, using the standard arithmetic of emotions as a test, we show that the mean reciprocal rank of the correct response is about 0.24; that is, combinations of word vectors are not a good proxy for expressed emotions. For example, the sum vector Joy + Fear, contrary to expectations, is not close to the vector representing Guilt. In addition, opposite emotions, such as Pessimism and Delight, have relatively high similarity to each other as word vectors (on average 0.2-0.44). Another experiment shows relatively low similarity (0.2-0.3) between the word embeddings of similar emotions, such as Anger and Envy. Thus the standard methods for producing word embeddings are not adequate for representing relationships between emotion words. We conclude with a few hypotheses about improving the accuracy of embeddings in representing emotions.
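
The sketch below illustrates, in Python with gensim's pretrained GloVe vectors, the two kinds of measurements described in the abstract: ranking the expected combined emotion among the nearest neighbours of a sum vector (mean reciprocal rank), and computing cosine similarity between pairs of emotion words. The specific embedding version, emotion lexicon, and combination triples used in the paper are not given here; the triple and word pairs below are illustrative assumptions taken from the abstract's examples.

    # Minimal sketch, not the paper's exact setup: embedding version and
    # emotion triples are assumptions based on the abstract's examples.
    import gensim.downloader as api

    model = api.load("glove-wiki-gigaword-300")  # pretrained GloVe (assumed version)

    # "Arithmetic of emotions" test: does the sum vector of two emotions
    # rank the expected combined emotion highly among nearest neighbours?
    triples = [("joy", "fear", "guilt")]  # illustrative; the paper uses a larger set

    def mean_reciprocal_rank(triples, topn=100):
        rr = []
        for a, b, target in triples:
            neighbours = [w for w, _ in model.most_similar(positive=[a, b], topn=topn)]
            rr.append(1.0 / (neighbours.index(target) + 1) if target in neighbours else 0.0)
        return sum(rr) / len(rr)

    print("MRR:", mean_reciprocal_rank(triples))

    # Pairwise cosine similarity: an "opposite" pair vs. a "similar" pair.
    print("pessimism ~ delight:", model.similarity("pessimism", "delight"))
    print("anger ~ envy:", model.similarity("anger", "envy"))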

Keywords


natural language processing, emotion, embedding, vector space
