Towards Consistent Variational Auto-Encoding (Student Abstract)

Authors

  • Yijing Liu, Beijing University of Posts and Telecommunications
  • Shuyu Lin, University of Oxford
  • Ronald Clark, University of Oxford

DOI:

https://doi.org/10.1609/aaai.v34i10.7207

Abstract

Variational autoencoders (VAEs) have been a successful approach to learning meaningful representations of data in an unsupervised manner. However, suboptimal representations are often learned because the approximate inference model fails to match the true posterior of the generative model, i.e., an inconsistency exists between the learned inference and generative models. In this paper, we introduce a novel consistency loss that directly requires the encoding of the reconstructed data point to match the encoding of the original data, leading to better representations. Through experiments on MNIST and Fashion MNIST, we demonstrate the existence of this inconsistency in VAE learning and show that our method can effectively reduce it.

Published

2020-04-03

How to Cite

Liu, Y., Lin, S., & Clark, R. (2020). Towards Consistent Variational Auto-Encoding (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10), 13869-13870. https://doi.org/10.1609/aaai.v34i10.7207

Section

Student Abstract Track