Relevance-Promoting Language Model for Short-Text Conversation

  • Xin Li The Chinese University of Hong Kong
  • Piji Li Tencent AI Lab
  • Wei Bi Tencent AI Lab
  • Xiaojiang Liu Tencent AI Lab
  • Wai Lam The Chinese University of Hong Kong

Abstract

Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploiting the training data (i.e., the supervision signal from the query text is ignored) remains unresolved. Moreover, the commonly adopted maximization-based decoding strategies, which tend to produce generic or repetitive responses, are ill-suited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model that performs additional supervised source attention after the self-attention, increasing the importance of informative query tokens when computing token-level representations. The model further refines the query representation with relevance clues inferred from its multiple references during training. At test time, we adopt a randomization-over-maximization decoding strategy to reduce the generation of generic responses. Experimental results on a large-scale Chinese STC dataset demonstrate the superiority of the proposed model on both relevance and diversity metrics.
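The sketch below illustrates, in simplified form, the two ideas named in the abstract: a decoder block that applies source attention over the query tokens after self-attention, and a randomization-over-maximization decoding step that samples from the top candidates instead of taking the argmax. All module and function names are illustrative; the supervised attention objective and the reference-based query refinement from the paper are omitted.

```python
# Minimal PyTorch sketch (not the authors' released code); hyperparameters are placeholders.
import torch
import torch.nn as nn


class RelevancePromotingBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, src, causal_mask=None):
        # Causal self-attention over the token sequence being modeled.
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # Additional source attention over the query tokens, so informative
        # query tokens contribute more to each token-level representation.
        h, src_weights = self.src_attn(x, src, src)
        x = self.norm2(x + h)
        x = self.norm3(x + self.ff(x))
        # src_weights could be supervised with relevance labels during training.
        return x, src_weights


def randomized_decode_step(logits, k=20):
    # Randomization-over-maximization (sketch): sample from the top-k
    # candidates rather than taking the argmax, to curb generic responses.
    topk_logits, topk_ids = torch.topk(logits, k, dim=-1)
    probs = torch.softmax(topk_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_ids.gather(-1, choice)
```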

Published
2020-04-03
Section
AAAI Technical Track: Natural Language Processing