Improving GAN with Neighbors Embedding and Gradient Matching

Authors

  • Ngoc-Trung Tran, Singapore University of Technology and Design
  • Tuan-Anh Bui, Singapore University of Technology and Design
  • Ngai-Man Cheung, Singapore University of Technology and Design

DOI:

https://doi.org/10.1609/aaai.v33i01.33015191

Abstract

We propose two new techniques for training Generative Adversarial Networks (GANs) in the unsupervised setting. Our objectives are to alleviate mode collapse in GAN and improve the quality of the generated samples. First, we propose neighbor embedding, a manifold learning-based regularization that explicitly retains the local structure of latent samples in the generated samples. This prevents the generator from producing nearly identical data samples from different latent samples, and thereby reduces mode collapse. We propose an inverse t-SNE regularizer to achieve this. Second, we propose a new technique, gradient matching, to align the distributions of the generated samples and the real samples. As it is challenging to work with high-dimensional sample distributions directly, we propose to align these distributions through the scalar discriminator scores. We constrain the difference between the discriminator scores of the real and generated samples, and further constrain the difference between the gradients of these scores. We derive these constraints from Taylor approximations of the discriminator function. We perform experiments demonstrating that our proposed techniques are computationally simple and easy to incorporate into existing systems. When gradient matching and neighbor embedding are applied together, our GN-GAN achieves outstanding results on 1D/2D synthetic, CIFAR-10 and STL-10 datasets, e.g., an FID score of 30.80 on STL-10. Our code is available at: https://github.com/tntrung/gan
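
For readers who want a concrete picture of the two regularizers summarized above, the following is a minimal PyTorch sketch, not the authors' released code (see the GitHub link above). It shows one plausible form of an inverse-t-SNE-style neighbor embedding penalty (matching pairwise similarities of a latent batch and the corresponding generated batch) and a gradient matching penalty on discriminator scores and their input gradients. The function names, the choice of Gaussian/Student-t kernels, the squared-difference penalties, and the hyperparameter `sigma` are illustrative assumptions.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not the authors' implementation.
import torch


def pairwise_sq_dists(x):
    """Squared Euclidean distances between all rows of a flattened batch."""
    x = x.view(x.size(0), -1)
    return torch.cdist(x, x, p=2).pow(2)


def neighbor_embedding_penalty(z, g_z, sigma=1.0):
    """Inverse t-SNE style regularizer (sketch): encourage the generated batch
    g_z = G(z) to preserve the local structure of the latent batch z by
    matching pairwise similarity distributions (Gaussian kernel in latent
    space, Student-t kernel in data space), via KL(P || Q)."""
    n = z.size(0)
    eye = torch.eye(n, device=z.device, dtype=torch.bool)

    p = torch.exp(-pairwise_sq_dists(z) / (2 * sigma ** 2))
    p = p.masked_fill(eye, 0.0)
    p = p / p.sum()

    q = 1.0 / (1.0 + pairwise_sq_dists(g_z))
    q = q.masked_fill(eye, 0.0)
    q = q / q.sum()

    return (p * (torch.log(p + 1e-12) - torch.log(q + 1e-12))).sum()


def gradient_matching_penalty(disc, x_real, x_fake):
    """Gradient matching (sketch): penalize the difference between the mean
    discriminator scores on real and generated samples, and between the mean
    gradients of those scores with respect to the inputs (the zeroth- and
    first-order terms of a Taylor expansion of the discriminator)."""
    # Detach so the inputs are leaf tensors we can differentiate with respect
    # to; the penalty still backpropagates into the discriminator parameters.
    x_real = x_real.detach().requires_grad_(True)
    x_fake = x_fake.detach().requires_grad_(True)
    d_real, d_fake = disc(x_real), disc(x_fake)

    score_diff = (d_real.mean() - d_fake.mean()).pow(2)

    g_real = torch.autograd.grad(d_real.sum(), x_real, create_graph=True)[0]
    g_fake = torch.autograd.grad(d_fake.sum(), x_fake, create_graph=True)[0]
    grad_diff = (g_real.mean(0) - g_fake.mean(0)).pow(2).sum()

    return score_diff + grad_diff
```

In a training loop, these terms would be added, with suitable weights, to the usual GAN losses: the neighbor embedding penalty to the generator objective and the gradient matching penalty to the objective that shapes the discriminator scores.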

Published

2019-07-17

How to Cite

Tran, N.-T., Bui, T.-A., & Cheung, N.-M. (2019). Improving GAN with Neighbors Embedding and Gradient Matching. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5191-5198. https://doi.org/10.1609/aaai.v33i01.33015191

Section

AAAI Technical Track: Machine Learning