Attending to Entities for Better Text Understanding

Authors

  • Pengxiang Cheng The University of Texas at Austin
  • Katrin Erk The University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v34i05.6254

Abstract

Recent progress in NLP has witnessed the development of large-scale pre-trained language models (GPT, BERT, XLNet, etc.) based on the Transformer (Vaswani et al. 2017), and such models have achieved state-of-the-art results on a range of end tasks, approaching human performance. This clearly demonstrates the power of the stacked self-attention architecture when paired with a sufficient number of layers and a large amount of pre-training data. However, on tasks that require complex and long-distance reasoning, where surface-level cues are not enough, there is still a large gap between the pre-trained models and human performance. Strubell et al. (2018) recently showed that it is possible to inject knowledge of syntactic structure into a model through supervised self-attention. We conjecture that a similar injection of semantic knowledge, in particular coreference information, into an existing model would improve performance on such complex problems. On the LAMBADA (Paperno et al. 2016) task, we show that a model trained from scratch with coreference as auxiliary supervision for self-attention outperforms the largest GPT-2 model, setting a new state of the art, while containing only a tiny fraction of GPT-2's parameters. We also conduct a thorough analysis of different variants of model architectures and supervision configurations, suggesting future directions for applying similar techniques to other problems.
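
The general idea of coreference as auxiliary supervision for self-attention can be sketched roughly as follows: one attention head receives an extra loss encouraging each mention token to attend to its gold antecedent, alongside the usual language-modeling objective. This is a minimal PyTorch illustration of that idea under stated assumptions, not the paper's implementation; the module name (CorefSupervisedAttention), the input convention (antecedent_ids with -1 for unsupervised tokens), and the cross-entropy form of the auxiliary loss are all hypothetical.

    # Minimal sketch (hypothetical names): a single causal attention head with
    # an auxiliary loss tying its attention weights to gold coreference links.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CorefSupervisedAttention(nn.Module):
        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.scale = d_model ** -0.5

        def forward(self, x, antecedent_ids=None):
            # x: (batch, seq_len, d_model)
            # antecedent_ids: (batch, seq_len); index of the gold antecedent
            # token for each supervised mention token, -1 elsewhere.
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale

            # Causal mask: each token attends only to itself and earlier tokens.
            seq_len = x.size(1)
            causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                           device=x.device), diagonal=1)
            scores = scores.masked_fill(causal, float("-inf"))
            attn = F.softmax(scores, dim=-1)
            out = torch.matmul(attn, v)

            aux_loss = None
            if antecedent_ids is not None:
                supervised = antecedent_ids >= 0
                if supervised.any():
                    # Cross-entropy between each supervised token's attention
                    # distribution and the position of its gold antecedent.
                    logits = scores[supervised]           # (n_supervised, seq_len)
                    targets = antecedent_ids[supervised]  # (n_supervised,)
                    aux_loss = F.cross_entropy(logits, targets)
            return out, aux_loss

In a full model, this auxiliary loss would be added to the language-modeling loss with some weighting coefficient; the actual architecture, supervision targets, and loss formulation in the paper may differ.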

Published

2020-04-03

How to Cite

Cheng, P., & Erk, K. (2020). Attending to Entities for Better Text Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7554-7561. https://doi.org/10.1609/aaai.v34i05.6254

Section

AAAI Technical Track: Natural Language Processing