AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows

Authors

  • Aditya Grover, Stanford University
  • Christopher Chute, Stanford University
  • Rui Shu, Stanford University
  • Zhangjie Cao, Stanford University
  • Stefano Ermon, Stanford University

DOI:

https://doi.org/10.1609/aaai.v34i04.5820

Abstract

Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative modeling framework that models each domain via a normalizing flow. The use of normalizing flows allows for (a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two; and (b) learning and exact inference of a shared representation in the latent space of the generative model. We derive a uniform set of conditions under which AlignFlow is marginally-consistent for the different learning objectives. Furthermore, we show that AlignFlow guarantees exact cycle consistency in mapping datapoints from a source domain to a target domain and back to the source. Empirically, AlignFlow outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation, and can be used to simultaneously interpolate across the various domains using the learned representation.
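
The exact cycle consistency claimed in the abstract follows directly from invertibility: if each domain is mapped to a shared latent space by an invertible flow, the cross-domain map is the composition of one flow with the inverse of the other, and the round trip is the identity by construction. Below is a minimal sketch of this idea, not the authors' code: the AffineFlow class and all parameter values are hypothetical toy stand-ins for the deep invertible networks (e.g., coupling-layer flows) used in the paper.

```python
# Toy illustration of AlignFlow's cross-domain mapping via invertible flows.
# Each domain has a flow into a shared latent space Z; the translation
# G_{A->B} = f_B^{-1} o f_A is invertible, so A -> B -> A is exact.
import numpy as np

class AffineFlow:
    """Toy invertible flow z = scale * x + shift (elementwise, scale != 0)."""
    def __init__(self, scale, shift):
        self.scale = np.asarray(scale, dtype=float)
        self.shift = np.asarray(shift, dtype=float)

    def forward(self, x):   # data space -> shared latent space Z
        return self.scale * x + self.shift

    def inverse(self, z):   # shared latent space Z -> data space
        return (z - self.shift) / self.scale

# One flow per domain, both mapping into the same latent space Z
# (parameter values are arbitrary illustrations).
flow_a = AffineFlow(scale=[2.0, 0.5], shift=[1.0, -1.0])
flow_b = AffineFlow(scale=[0.25, 4.0], shift=[0.0, 3.0])

def translate_a_to_b(x_a):
    return flow_b.inverse(flow_a.forward(x_a))  # G_{A->B} = f_B^{-1} o f_A

def translate_b_to_a(x_b):
    return flow_a.inverse(flow_b.forward(x_b))  # G_{B->A} = f_A^{-1} o f_B

x_a = np.array([0.3, -1.2])
x_ab = translate_a_to_b(x_a)    # map A -> B
x_aba = translate_b_to_a(x_ab)  # map back B -> A
# Exact cycle consistency (up to floating-point error), with no cycle loss.
assert np.allclose(x_a, x_aba)
```

Note that the exactness of the round trip does not depend on how the flows are trained (adversarially, by maximum likelihood, or a hybrid); it is a structural property of composing a flow with an inverse flow.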

Published

2020-04-03

How to Cite

Grover, A., Chute, C., Shu, R., Cao, Z., & Ermon, S. (2020). AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4028-4035. https://doi.org/10.1609/aaai.v34i04.5820

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning