Transductive Zero-Shot Learning via Visual Center Adaptation

Authors

  • Ziyu Wan, Chinese Academy of Sciences
  • Yan Li, Chinese Academy of Sciences
  • Min Yang, Chinese Academy of Sciences
  • Junge Zhang, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v33i01.330110059

Abstract

In this paper, we propose a Visual Center Adaptation Method (VCAM) to address the domain shift problem in zero-shot learning. For the seen classes in the training data, VCAM builds an embedding space by learning the mapping from the semantic space to visual centers. For the unseen classes in the test data, the construction of the embedding space is constrained by a symmetric Chamfer-distance term that adapts the distribution of the synthetic visual centers to that of the real cluster centers. The learned embedding space therefore generalizes well to the unseen classes. Experiments on two widely used datasets demonstrate that our model significantly outperforms state-of-the-art methods.
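The key ingredient named in the abstract is the symmetric Chamfer-distance term that pulls the distribution of synthetic visual centers toward that of the real cluster centers. The paper's exact formulation is not reproduced on this page; as a rough illustration only, a minimal NumPy sketch of such a term (with hypothetical array names) might look as follows:

```python
import numpy as np

def symmetric_chamfer(synthetic_centers, cluster_centers):
    """Symmetric Chamfer distance between two point sets.

    synthetic_centers: (m, d) array, e.g. centers predicted by the
        embedding network from unseen-class semantic vectors.
    cluster_centers: (k, d) array, e.g. cluster centers obtained from
        the unlabeled test features.
    (Array names and shapes are illustrative assumptions.)
    """
    # Pairwise squared Euclidean distances, shape (m, k).
    diff = synthetic_centers[:, None, :] - cluster_centers[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Each synthetic center to its nearest cluster center, and vice versa.
    forward = d2.min(axis=1).mean()
    backward = d2.min(axis=0).mean()
    return forward + backward
```

In the transductive setting the abstract describes, cluster_centers would typically come from clustering the unlabeled test features (e.g., with k-means), while synthetic_centers would be the outputs of the learned semantic-to-visual mapping for the unseen classes; minimizing the term aligns the two sets without requiring a one-to-one correspondence.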

Published

2019-07-17

How to Cite

Wan, Z., Li, Y., Yang, M., & Zhang, J. (2019). Transductive Zero-Shot Learning via Visual Center Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 10059-10060. https://doi.org/10.1609/aaai.v33i01.330110059

Section

Student Abstract Track