In this paper, we propose a novel structure for multi-modal data association, referred to as the Associative Variational Auto-Encoder (AVAE). In contrast to existing models that use a single latent space shared among modalities, our structure adopts a separate latent space for each modality, and these spaces are connected through cross-modal associators. The proposed structure successfully associates even heterogeneous modality data and easily incorporates additional modalities into the network via new associators. Furthermore, in our structure, only a small amount of supervised (paired) data is needed to train the associators, since the auto-encoders themselves are trained in an unsupervised manner. Through experiments, the effectiveness of the proposed structure is validated on various datasets, including visual and auditory data.
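The two-stage idea described above (unsupervised per-modality auto-encoders, then associators fit on a small paired subset) can be sketched with a deliberately simplified linear stand-in. This is not the paper's VAE architecture: the auto-encoders here are truncated-SVD (PCA-style) linear codecs, the associator is a least-squares latent-to-latent map, and the synthetic data, dimensions, and sample counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared 2-D latent factor generates two synthetic "modalities"
# through different linear observation maps (illustrative data only).
z = rng.normal(size=(500, 2))
A = z @ rng.normal(size=(2, 8))   # modality A: 8-D observations
B = z @ rng.normal(size=(2, 6))   # modality B: 6-D observations

def fit_linear_ae(X, k=2):
    """Unsupervised linear auto-encoder via truncated SVD (a PCA stand-in)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                                  # orthonormal codec weights
    encode = lambda X: (X - mu) @ W               # data -> latent
    decode = lambda Z: Z @ W.T + mu               # latent -> data
    return encode, decode

# Stage 1: train each modality's auto-encoder with no pairing information.
enc_a, dec_a = fit_linear_ae(A)
enc_b, dec_b = fit_linear_ae(B)

# Stage 2: fit the associator on a small paired subset (20 of 500 samples),
# mapping modality A's latent space into modality B's latent space.
pairs = slice(0, 20)
Za, Zb = enc_a(A[pairs]), enc_b(B[pairs])
M, *_ = np.linalg.lstsq(Za, Zb, rcond=None)

# Cross-modal association: observe A, predict B through the associator.
B_hat = dec_b(enc_a(A) @ M)
err = np.mean((B_hat - B) ** 2) / np.mean((B - B.mean(axis=0)) ** 2)
```

Because the synthetic modalities are exact linear functions of a shared factor, the relative cross-modal reconstruction error `err` is essentially zero; with real heterogeneous data and nonlinear VAEs, the associators would instead be small trained networks between the learned latent spaces.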
Published Date: 2020-06-02
Registration: ISSN 2374-3468 (Online), ISSN 2159-5399 (Print), ISBN 978-1-57735-835-0 (10-issue set)
Copyright: Published by AAAI Press, Palo Alto, California, USA. Copyright © 2020, Association for the Advancement of Artificial Intelligence. All Rights Reserved.