Syntactically Look-Ahead Attention Network for Sentence Compression

Authors

  • Hidetaka Kamigaito, Tokyo Institute of Technology
  • Manabu Okumura, Tokyo Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v34i05.6315

Abstract

Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words. In sequence-to-sequence (Seq2Seq) based models, the decoder unidirectionally decides whether to retain or delete each word. Thus, it usually cannot explicitly capture the relationships between decoded words and unseen words that will be decoded in future time steps. Therefore, to avoid generating ungrammatical sentences, the decoder sometimes drops important words when compressing sentences. To solve this problem, we propose a novel Seq2Seq model, the syntactically look-ahead attention network (SLAHAN), which can generate informative summaries by explicitly tracking both dependency parent and child words during decoding and capturing important words that will be decoded in the future. The results of the automatic evaluation on the Google sentence compression dataset showed that SLAHAN achieved the best kept-token-based F1, ROUGE-1, ROUGE-2 and ROUGE-L scores of 85.5, 79.3, 71.3 and 79.1, respectively. SLAHAN also improved summarization performance on longer sentences. Furthermore, in the human evaluation, SLAHAN improved informativeness without losing readability.
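To make the setting in the abstract concrete, below is a minimal sketch (not the authors' code) of deletion-based compression in which each token's keep/delete decision is informed by its dependency parent and children. All names (CompressorSketch, head_ids) are hypothetical, the encoder is a plain BiLSTM, and the decision is a simple per-token classification; the actual SLAHAN recursively tracks parent/child chains with attention and gating during decoding.

```python
# Minimal sketch of syntax-informed deletion-based compression (illustrative only).
# Assumptions: hypothetical names, single-layer BiLSTM encoder, greedy per-token
# keep/delete classification; SLAHAN itself uses recursive look-ahead attention.

import torch
import torch.nn as nn


class CompressorSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # score keep/delete from [token state; parent state; child summary]
        self.classifier = nn.Linear(3 * 2 * hid_dim, 2)

    def forward(self, token_ids, head_ids):
        # token_ids: (1, n) word indices; head_ids: (n,) dependency head index
        # of each token (the root points to itself).
        enc, _ = self.encoder(self.embed(token_ids))   # (1, n, 2*hid)
        enc = enc.squeeze(0)                           # (n, 2*hid)
        n = enc.size(0)

        parent = enc[head_ids]                         # state of each token's head word
        # child summary: mean state of all tokens whose head is this token
        child = torch.zeros_like(enc)
        for i in range(n):
            mask = head_ids == i
            if mask.any():
                child[i] = enc[mask].mean(dim=0)

        logits = self.classifier(torch.cat([enc, parent, child], dim=-1))
        return logits.argmax(dim=-1)                   # 1 = keep, 0 = delete


# Toy usage: a 6-token sentence with a hand-written dependency head list.
model = CompressorSketch(vocab_size=100)
tokens = torch.tensor([[5, 12, 7, 33, 2, 9]])
heads = torch.tensor([2, 2, 2, 2, 5, 3])   # head index per token (root -> itself)
print(model(tokens, heads))                # per-token keep/delete labels (untrained)
```

The point of the sketch is only the information flow: the keep/delete decision for a word can consult its syntactic parent and children, which is the intuition the look-ahead attention in SLAHAN builds on.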

Published

2020-04-03

How to Cite

Kamigaito, H., & Okumura, M. (2020). Syntactically Look-Ahead Attention Network for Sentence Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8050-8057. https://doi.org/10.1609/aaai.v34i05.6315

Section

AAAI Technical Track: Natural Language Processing