Mining on Heterogeneous Manifolds for Zero-Shot Cross-Modal Image Retrieval

Authors

  • Fan Yang, The University of Tokyo
  • Zheng Wang, National Institute of Informatics
  • Jing Xiao, National Institute of Informatics
  • Shin'ichi Satoh, National Institute of Informatics

DOI:

https://doi.org/10.1609/aaai.v34i07.6949

Abstract

Most recent approaches to zero-shot cross-modal image retrieval map images from different modalities into a uniform feature space with a pre-trained model and exploit their relevance there. Based on the observation that the manifolds of zero-shot images are usually deformed and incomplete, we argue that the manifolds of unseen classes are inevitably distorted during the training of a two-stream model that simply maps images from different modalities into a uniform space. This distortion directly degrades cross-modal retrieval performance. We propose a bi-directional random walk scheme to mine more reliable relationships between images by traversing the heterogeneous manifolds in the feature space of each modality. Our method exploits intra-modal distributions to alleviate the interference caused by noisy similarities in the cross-modal feature space. As a result, it substantially improves performance on the thermal vs. visible image retrieval task. Code: https://github.com/fyang93/cross-modal-retrieval
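The core idea, diffusing noisy cross-modal similarities along the intra-modal manifolds of each modality, can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the k-NN transition matrices, the mixing weight `alpha`, and the fixed iteration count are all assumptions made for the example; see the linked repository for the actual implementation.

```python
import numpy as np

def knn_transition(sim, k=5):
    """Build a row-stochastic transition matrix from an intra-modal
    similarity matrix, keeping only each sample's k nearest neighbors.
    (Illustrative choice; the paper's graph construction may differ.)"""
    W = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        idx = np.argsort(sim[i])[::-1][:k]
        W[i, idx] = np.clip(sim[i, idx], 0.0, None)
    row_sums = W.sum(axis=1, keepdims=True)
    return W / np.maximum(row_sums, 1e-12)

def bidirectional_random_walk(S, sim_q, sim_g, alpha=0.8, iters=10, k=5):
    """Diffuse the cross-modal similarity matrix S (queries x gallery)
    along both intra-modal manifolds.  Each step walks one hop on the
    query-side graph and one hop on the gallery-side graph, then mixes
    the result back with the original similarities."""
    Wq = knn_transition(sim_q, k)   # query-modality manifold
    Wg = knn_transition(sim_g, k)   # gallery-modality manifold
    F = S.copy()
    for _ in range(iters):
        F = alpha * (Wq @ F @ Wg.T) + (1.0 - alpha) * S
    return F
```

Ranking the gallery by the diffused scores `F` instead of the raw cross-modal similarities `S` lets reliable intra-modal neighborhoods compensate for distorted cross-modal distances on unseen classes.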

Published

2020-04-03

How to Cite

Yang, F., Wang, Z., Xiao, J., & Satoh, S. (2020). Mining on Heterogeneous Manifolds for Zero-Shot Cross-Modal Image Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12589-12596. https://doi.org/10.1609/aaai.v34i07.6949

Section

AAAI Technical Track: Vision