Enriching Word Embeddings with a Regressor Instead of Labeled Corpora

Authors

  • Mohamed Abdalla, University of Toronto
  • Magnus Sahlgren, Research Institutes of Sweden
  • Graeme Hirst, University of Toronto

DOI:

https://doi.org/10.1609/aaai.v33i01.33016188

Abstract

We propose a novel method for enriching word embeddings without the need for a labeled corpus. Instead, we show that relying on a regressor, trained with a small lexicon to predict pseudo-labels, significantly improves performance over current techniques that rely on human-derived sentence-level labels for an entire corpus. Our approach enables enrichment for corpora that have no labels (such as Wikipedia). Exploring the utility of this general approach on both sentiment and non-sentiment tasks, we show that enriching both Twitter- and Wikipedia-based embeddings yields notable performance improvements on binary sentiment classification, SemEval tasks, an embedding analogy task, and document classification. Importantly, our approach is notably better and more generalizable than other state-of-the-art approaches for enriching both labeled and unlabeled corpora.
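To make the general idea concrete, the sketch below illustrates one way such a pipeline could look: fit a regressor from existing word vectors to scores in a small seed lexicon, predict pseudo-labels for the full vocabulary, and then use those pseudo-labels for enrichment. This is only an illustrative sketch, not the authors' actual procedure; the ridge regressor, the toy lexicon, the random placeholder vectors, and the append-as-extra-dimension enrichment step are all assumptions made here for demonstration.

```python
# Illustrative sketch only: regressor-based pseudo-labels for embedding enrichment.
# The specific regressor, lexicon, and enrichment step are assumptions,
# not the method described in the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder embeddings; in practice these would be pretrained
# Twitter- or Wikipedia-based vectors.
vocab = ["good", "great", "bad", "awful", "table", "run"]
dim = 50
embeddings = {w: rng.normal(size=dim) for w in vocab}

# Small seed lexicon of word -> sentiment score (hypothetical values).
lexicon = {"good": 0.8, "great": 0.9, "bad": -0.7, "awful": -0.9}

# 1. Fit a regressor from word vectors to lexicon scores.
X = np.stack([embeddings[w] for w in lexicon])
y = np.array([lexicon[w] for w in lexicon])
regressor = Ridge(alpha=1.0).fit(X, y)

# 2. Predict pseudo-labels for the entire vocabulary,
#    including words never seen in the lexicon.
pseudo_labels = {w: float(regressor.predict(embeddings[w][None, :])[0]) for w in vocab}

# 3. Enrich: here we simply append the pseudo-label as an extra dimension;
#    the paper's actual enrichment procedure may differ.
enriched = {w: np.append(v, pseudo_labels[w]) for w, v in embeddings.items()}

print({w: round(s, 2) for w, s in pseudo_labels.items()})
```

With pretrained embeddings and a real lexicon, step 2 would supply a pseudo-label for every word in an unlabeled corpus such as Wikipedia, which is what removes the need for sentence-level human labels.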

Published

2019-07-17

How to Cite

Abdalla, M., Sahlgren, M., & Hirst, G. (2019). Enriching Word Embeddings with a Regressor Instead of Labeled Corpora. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6188-6195. https://doi.org/10.1609/aaai.v33i01.33016188

Section

AAAI Technical Track: Natural Language Processing