Masking Orchestration: Multi-Task Pretraining for Multi-Role Dialogue Representation Learning

Authors

  • Tianyi Wang Alibaba Group
  • Yating Zhang Alibaba Group
  • Xiaozhong Liu Indiana University Bloomington
  • Changlong Sun Alibaba Group
  • Qiong Zhang Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v34i05.6459

Abstract

Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, and dialogue summarization. While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive. In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks, where the training objectives are given naturally according to the nature of the utterances and the structure of the multi-role conversation. Meanwhile, in order to locate essential information for dialogue summarization/extraction, the pretraining process enables external knowledge integration. The proposed fine-tuned pretraining mechanism is comprehensively evaluated on three different dialogue datasets along with a number of downstream dialogue-mining tasks. Results show that the proposed pretraining mechanism significantly contributes to all the downstream tasks, regardless of the choice of encoder.

Published

2020-04-03

How to Cite

Wang, T., Zhang, Y., Liu, X., Sun, C., & Zhang, Q. (2020). Masking Orchestration: Multi-Task Pretraining for Multi-Role Dialogue Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9217-9224. https://doi.org/10.1609/aaai.v34i05.6459

Section

AAAI Technical Track: Natural Language Processing