Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring

Authors

  • Xiyao Ma University of Florida
  • Qile Zhu University of Florida
  • Yanlin Zhou University of Florida
  • Xiaolin Li Tongdun Technology

DOI:

https://doi.org/10.1609/aaai.v34i05.6366

Abstract

Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation. However, we observe that these approaches often generate wrong question words or keywords and copy answer-irrelevant words from the input. We attribute these errors to two root causes: the lack of global question semantics and the insufficient exploitation of answer position awareness. In this paper, we propose a neural question generation model with two general modules: sentence-level semantic matching and answer position inferring. Further, we enhance the initial state of the decoder with an answer-aware gated fusion mechanism. Experimental results demonstrate that our model outperforms the state-of-the-art (SOTA) models on the SQuAD and MARCO datasets. Owing to its generality, our work also significantly improves existing models.
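The answer-aware gated fusion mentioned above can be sketched as a learned convex blend of a context representation and an answer representation. The exact parameterization below (a single sigmoid gate over the concatenated states, with hypothetical weights `W` and bias `b`) is an assumption for illustration, not the paper's precise formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(context_state, answer_state, W, b):
    """Gated fusion sketch (assumed form): a sigmoid gate decides, per
    dimension, how much of the context vs. answer state to keep when
    initializing the decoder."""
    concat = np.concatenate([context_state, answer_state])
    g = sigmoid(W @ concat + b)  # elementwise gate in (0, 1)
    return g * context_state + (1.0 - g) * answer_state

# Toy usage with random parameters (hypothetical dimensions).
rng = np.random.default_rng(0)
d = 4
context = rng.normal(size=d)
answer = rng.normal(size=d)
init_state = gated_fusion(context, answer, rng.normal(size=(d, 2 * d)), np.zeros(d))
```

Because the gate lies in (0, 1), each component of the fused state stays between the corresponding components of the two input states, so neither representation can be discarded entirely.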

Published

2020-04-03

How to Cite

Ma, X., Zhu, Q., Zhou, Y., & Li, X. (2020). Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8464-8471. https://doi.org/10.1609/aaai.v34i05.6366

Section

AAAI Technical Track: Natural Language Processing