Data Augmentation for Spoken Language Understanding via Joint Variational Generation

Authors

  • Kang Min Yoo, Seoul National University
  • Youhyun Shin, Seoul National University
  • Sang-goo Lee, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v33i01.33017402

Abstract

Data scarcity is one of the main obstacles to domain adaptation in spoken language understanding (SLU), owing to the high cost of creating manually tagged SLU datasets. Recent work on neural text generative models, particularly latent variable models such as the variational autoencoder (VAE), has shown promising results in generating plausible and natural sentences. In this paper, we propose a novel generative architecture that leverages the generative power of latent variable models to jointly synthesize fully annotated utterances. Our experiments show that existing SLU models trained on the additional synthetic examples achieve performance gains. Supported by extensive experiments and rigorous statistical testing, our approach not only helps alleviate the data scarcity issue in the SLU task for many datasets but also consistently improves language understanding performance across various SLU models.
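To make the idea of joint variational generation concrete, the sketch below illustrates one possible (hypothetical) realization in PyTorch, not the authors' exact architecture: a VAE whose decoder jointly emits utterance tokens and per-token slot labels, so that decoding from the latent space can yield synthetic, fully annotated examples. All module names, dimensions, and hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a VAE that jointly reconstructs an utterance and its slot annotation.
# Hypothetical illustration only; the paper's actual architecture may differ.
import torch
import torch.nn as nn


class JointVAE(nn.Module):
    def __init__(self, vocab_size, num_slots, embed_dim=64, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.token_head = nn.Linear(hidden_dim, vocab_size)  # reconstructs the words
        self.slot_head = nn.Linear(hidden_dim, num_slots)    # emits aligned slot tags

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, h = self.encoder(emb)                                   # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization trick
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)     # seed decoder with z
        out, _ = self.decoder(emb, h0)                             # teacher forcing in training
        return self.token_head(out), self.slot_head(out), mu, logvar


def vae_loss(token_logits, slot_logits, tokens, slots, mu, logvar):
    # Joint reconstruction of the utterance and its annotation, plus the KL regularizer.
    ce = nn.functional.cross_entropy
    rec = ce(token_logits.transpose(1, 2), tokens) + ce(slot_logits.transpose(1, 2), slots)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

Under this sketch, sampling z from the prior and decoding autoregressively would produce utterance and slot-label pairs that can be added to the SLU training set as synthetic annotated examples.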

Published

2019-07-17

How to Cite

Yoo, K. M., Shin, Y., & Lee, S.-goo. (2019). Data Augmentation for Spoken Language Understanding via Joint Variational Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7402-7409. https://doi.org/10.1609/aaai.v33i01.33017402

Section

AAAI Technical Track: Natural Language Processing