Associative Variational Auto-Encoder with Distributed Latent Spaces and Associators

Authors

  • Dae Ung Jo, Seoul National University
  • ByeongJu Lee, Seoul National University
  • Jongwon Choi, Samsung SDS
  • Haanju Yoo, Samsung Research
  • Jin Young Choi, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v34i07.6778

Abstract

In this paper, we propose a novel structure for multi-modal data association, referred to as the Associative Variational Auto-Encoder (AVAE). In contrast to existing models that use a single latent space shared among modalities, our structure adopts distributed latent spaces, one per modality, which are connected through cross-modal associators. The proposed structure successfully associates even heterogeneous modality data and easily incorporates additional modalities into the entire network via new associators. Furthermore, in our structure, only a small amount of supervised (paired) data suffices to train the associators, once the auto-encoders have been trained in an unsupervised manner. Through experiments, the effectiveness of the proposed structure is validated on various datasets including visual and auditory data.
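The two-stage idea in the abstract — per-modality latent spaces trained first, then a cross-modal associator fit on a small paired set — can be sketched as follows. This is a minimal illustration only: linear maps stand in for the paper's learned VAE encoders and neural-network associator, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated latent codes from two *separate*, already-trained latent spaces.
# For illustration, both codes are generated from a shared 4-dim underlying
# factor, mimicking a small paired dataset across two modalities.
s = rng.normal(size=(32, 4))            # shared factors behind each pair
z_a = s @ rng.normal(size=(4, 4))       # modality A's 4-dim latent codes
z_b = s @ rng.normal(size=(4, 6))       # modality B's 6-dim latent codes

# Associator: fit a map z_a -> z_b using only the small paired set
# (least squares here; the paper trains a small network instead).
W, *_ = np.linalg.lstsq(z_a, z_b, rcond=None)

# Cross-modal association: given A's code, predict B's code; modality B's
# decoder (not shown) would then reconstruct B-domain data from z_b_hat.
z_b_hat = z_a @ W
rel_err = np.linalg.norm(z_b_hat - z_b) / np.linalg.norm(z_b)
print(f"relative association error: {rel_err:.2e}")
```

Note that the auto-encoders for each modality are never touched when a new modality is added; only a new associator needs to be fit, which is what keeps the paired-data requirement small.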

Published

2020-04-03

How to Cite

Jo, D. U., Lee, B., Choi, J., Yoo, H., & Choi, J. Y. (2020). Associative Variational Auto-Encoder with Distributed Latent Spaces and Associators. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11197-11204. https://doi.org/10.1609/aaai.v34i07.6778

Section

AAAI Technical Track: Vision