Improving Natural Language Inference Using External Knowledge in the Science Questions Domain

  • Xiaoyan Wang, University of Illinois at Urbana-Champaign
  • Pavan Kapanipathi, IBM Research
  • Ryan Musa, IBM Research
  • Mo Yu, IBM Research
  • Kartik Talamadupula, IBM Research
  • Ibrahim Abdelaziz, IBM Research
  • Maria Chang, IBM Research
  • Achille Fokoue, IBM Research
  • Bassem Makni, IBM Research
  • Nicholas Mattei, IBM Research
  • Michael Witbrock, IBM Research

Abstract

Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications, including semantic search and question answering. The NLI problem has gained significant attention due to the release of large-scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge – a central topic in artificial intelligence – has not received much attention vis-à-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness external knowledge to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-and-graph based models, and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.
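As a minimal illustration of the three-way classification task the abstract describes, the sketch below shows the premise/hypothesis/label format used in NLI. The example sentences are invented for illustration only; they are not drawn from the SciTail dataset, and SciTail itself is not required to run this.

```python
# Sketch of the three-way NLI task: given a premise and a hypothesis,
# predict whether the premise entails, contradicts, or is neutral
# toward the hypothesis. Sentences below are illustrative inventions.

NLI_LABELS = ("entailment", "contradiction", "neutral")

examples = [
    # (premise, hypothesis, gold label)
    ("An atom is the smallest unit of an element.",
     "Elements are made up of atoms.",
     "entailment"),
    ("The Moon orbits the Earth.",
     "The Moon does not orbit the Earth.",
     "contradiction"),
    ("Plants use sunlight to produce energy.",
     "Plants grow faster in greenhouses.",
     "neutral"),
]

for premise, hypothesis, label in examples:
    assert label in NLI_LABELS
    print(f"P: {premise}\nH: {hypothesis}\n=> {label}\n")
```

Learning-based NLI models, as characterized in the abstract, take only the two text inputs and predict one of these labels; the paper's contribution is augmenting such models with structured external knowledge.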

Published: 2019-07-17
Section: AAAI Technical Track: Natural Language Processing