Distributional Semantics Meets Multi-Label Learning

  • Vivek Gupta University of Utah
  • Rahul Wadbude AlphaGrep
  • Nagarajan Natarajan Microsoft Research
  • Harish Karnick Indian Institute of Technology Kanpur
  • Prateek Jain Microsoft Research
  • Piyush Rai Indian Institute of Technology Kanpur

Abstract

We present a label-embedding-based approach to large-scale multi-label learning, drawing inspiration from ideas rooted in distributional semantics, specifically the Skip-Gram Negative Sampling (SGNS) approach widely used to learn word embeddings. Besides leading to a highly scalable model for multi-label learning, our approach highlights interesting connections between the label embedding methods commonly used for multi-label learning and the paragraph embedding methods commonly used to learn representations of text data. The framework easily extends to incorporate auxiliary information such as label-label correlations; this is crucial especially when many training instances are only partially annotated. To facilitate end-to-end learning, we develop a joint learning algorithm that learns, via efficient gradient-based methods, both the embeddings and a regression model that predicts these embeddings for new inputs to be annotated. We demonstrate the effectiveness of our approach through an extensive set of experiments on a variety of benchmark datasets, and show that the proposed models perform favorably compared to state-of-the-art methods for large-scale multi-label learning.
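To make the core analogy concrete, the sketch below shows how an SGNS-style objective can be applied to labels rather than words: each instance's label set is treated like a sentence, co-occurring label pairs serve as positive (target, context) pairs, and negatives are sampled from the label vocabulary. This is a minimal illustration under those assumptions, not the paper's actual algorithm; all names (`train_sgns_label_embeddings`, the toy data, and hyperparameters) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: each instance is annotated with a set of label ids.
# (Hypothetical example data, not from the paper.)
instances = [
    {0, 1}, {0, 1, 2}, {2, 3}, {1, 2}, {3, 4}, {2, 3, 4},
]
num_labels = 5
dim = 8


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def train_sgns_label_embeddings(instances, num_labels, dim,
                                epochs=100, k_neg=3, lr=0.05):
    """SGNS over label co-occurrences: every pair of labels that co-occur
    on an instance is a positive (target, context) pair; negatives are
    drawn uniformly from the label vocabulary."""
    W = 0.1 * rng.standard_normal((num_labels, dim))  # target vectors
    C = 0.1 * rng.standard_normal((num_labels, dim))  # context vectors
    for _ in range(epochs):
        for labels in instances:
            for t in labels:
                for c in labels:
                    if t == c:
                        continue
                    # Positive pair: increase sigma(w_t . c_c).
                    wt, cc = W[t].copy(), C[c].copy()
                    g = sigmoid(wt @ cc) - 1.0
                    W[t] -= lr * g * cc
                    C[c] -= lr * g * wt
                    # Negative samples: decrease sigma(w_t . c_n).
                    for n in rng.integers(0, num_labels, size=k_neg):
                        wt, cn = W[t].copy(), C[n].copy()
                        g = sigmoid(wt @ cn)
                        W[t] -= lr * g * cn
                        C[n] -= lr * g * wt
    return W


emb = train_sgns_label_embeddings(instances, num_labels, dim)
```

In the full framework described in the abstract, such label embeddings would additionally be tied to a regression model mapping input features to the embedding space, with both trained jointly; the sketch covers only the embedding side of that picture.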

Published
2019-07-17
How to Cite
Gupta, V., Wadbude, R., Natarajan, N., Karnick, H., Jain, P., & Rai, P. (2019). Distributional Semantics Meets Multi-Label Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3747-3754. https://doi.org/10.1609/aaai.v33i01.33013747