Learning to Transfer: Unsupervised Domain Translation via Meta-Learning

Authors

  • Jianxin Lin, University of Science and Technology of China
  • Yijun Wang, University of Science and Technology of China
  • Zhibo Chen, University of Science and Technology of China
  • Tianyu He, Alibaba

DOI:

https://doi.org/10.1609/aaai.v34i07.6816

Abstract

Unsupervised domain translation has recently achieved impressive performance with Generative Adversarial Networks (GANs) and sufficient (unpaired) training data. However, existing domain translation frameworks are trained in a disposable fashion: learning experiences are discarded, and the resulting model cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain translation from a meta-learning perspective. We propose a model called Meta-Translation GAN (MT-GAN) to find a good initialization for translation models. In the meta-training procedure, MT-GAN is explicitly trained with a primary translation task and a synthesized dual translation task. A cycle-consistency meta-optimization objective is designed to ensure generalization ability. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks. We show that our proposed approach significantly outperforms existing domain translation methods when each domain contains no more than ten training samples.
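The "good initialization" idea in the abstract follows the general shape of optimization-based meta-learning (MAML-style): adapt the shared parameters to each task with an inner gradient step, then update the initialization so that it adapts well across tasks. The sketch below is an illustrative first-order toy on scalar parameters, not the authors' actual MT-GAN objective; the quadratic task losses and learning rates are hypothetical stand-ins for the per-task translation losses.

```python
# Hypothetical first-order MAML-style meta-update on a scalar parameter.
# Each "task" is represented only by its loss gradient; in MT-GAN the
# tasks would be translation tasks and theta the generator parameters.

def inner_adapt(theta, grad_fn, inner_lr=0.1):
    """One gradient step of task-specific adaptation from the shared init."""
    return theta - inner_lr * grad_fn(theta)

def meta_update(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    """First-order meta-update: average the post-adaptation gradients
    and move the shared initialization in that direction."""
    meta_grad = 0.0
    for grad_fn in tasks:
        adapted = inner_adapt(theta, grad_fn, inner_lr)
        meta_grad += grad_fn(adapted)  # first-order approximation
    return theta - outer_lr * meta_grad / len(tasks)

# Toy tasks: quadratic losses 0.5*(theta - c)^2 with different optima c,
# so grad_fn(theta) = theta - c. Stand-ins for real translation losses.
tasks = [lambda t, c=c: t - c for c in (1.0, 2.0, 3.0)]

theta = 0.0
for _ in range(200):
    theta = meta_update(theta, tasks)
# theta converges near 2.0, the initialization that adapts well to all tasks
```

With symmetric quadratic tasks the learned initialization settles at the mean of the task optima; the point of the construction is that a single shared starting point can be tuned so that one cheap inner step already performs well on each new task.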

Published

2020-04-03

How to Cite

Lin, J., Wang, Y., Chen, Z., & He, T. (2020). Learning to Transfer: Unsupervised Domain Translation via Meta-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11507-11514. https://doi.org/10.1609/aaai.v34i07.6816

Section

AAAI Technical Track: Vision