Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation

Authors

  • Haiyan Yin, Baidu Research
  • Dingcheng Li, Baidu Research
  • Xu Li, Baidu Research
  • Ping Li, Baidu Research

DOI:

https://doi.org/10.1609/aaai.v34i05.6490

Abstract

Training generative models that can produce high-quality text with sufficient diversity is an important open problem for the Natural Language Generation (NLG) community. Recently, generative adversarial models have been applied extensively to text generation tasks, where the adversarially trained generators alleviate the exposure bias experienced by conventional maximum likelihood approaches and achieve promising generation quality. However, due to the notorious mode-collapse defect of adversarial training, the adversarially trained generators face a quality-diversity trade-off: they tend to sacrifice generation diversity severely in exchange for higher generation quality. In this paper, we propose a novel approach that improves the performance of adversarial text generation by effectively decelerating the mode collapse of adversarial training. To this end, we introduce a cooperative training paradigm in which a language model is trained cooperatively alongside the generator, and we use the language model to shape the generator's data distribution against mode collapse. Moreover, rather than applying the cooperative update to the generator directly, we formulate a meta learning mechanism in which the cooperative update serves as a high-level meta task, with the intuition of ensuring that the generator's parameters remain resistant to mode collapse after each adversarial update. In the experiments, we demonstrate that our proposed approach effectively slows down the pace of mode collapse for adversarial text generators. Overall, our proposed method outperforms the baseline approaches by significant margins in terms of both generation quality and diversity in the tested domains.
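The meta learning mechanism described above evaluates the cooperative (language-model) loss at the generator's parameters *after* the adversarial update, and differentiates through that update, MAML-style. The toy sketch below illustrates this control flow on scalar quadratic losses; the loss functions, learning rates, and the single-parameter "generator" are all hypothetical stand-ins, not the paper's actual objectives or architecture.

```python
# Toy sketch of a meta cooperative update (hypothetical losses, not the
# paper's objectives). The adversarial loss pulls theta toward 3.0; the
# cooperative language-model loss pulls it toward 1.0.
def adv_loss(theta):  return 0.5 * (theta - 3.0) ** 2
def adv_grad(theta):  return theta - 3.0
def coop_loss(theta): return 0.5 * (theta - 1.0) ** 2
def coop_grad(theta): return theta - 1.0

inner_lr, meta_lr = 0.1, 0.05
theta = 0.0
for _ in range(200):
    # Inner step: the ordinary adversarial update of the generator.
    theta_adv = theta - inner_lr * adv_grad(theta)
    # Meta step: the cooperative loss is evaluated at the post-adversarial
    # parameters, and its gradient is chained back through the inner update
    # (d theta_adv / d theta = 1 - inner_lr for this quadratic loss), so the
    # generator is pushed toward parameters that stay resistant to mode
    # collapse even after the adversarial update.
    meta_grad = coop_grad(theta_adv) * (1.0 - inner_lr)
    theta = theta_adv - meta_lr * meta_grad
```

With these toy losses, theta settles between the two attractors (near 2.36), illustrating how the meta-level cooperative signal tempers the adversarial objective rather than replacing it.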

Published

2020-04-03

How to Cite

Yin, H., Li, D., Li, X., & Li, P. (2020). Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9466-9473. https://doi.org/10.1609/aaai.v34i05.6490

Section

AAAI Technical Track: Natural Language Processing