Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification

  • Guan-An Wang Institute of Automation, Chinese Academy of Sciences
  • Tianzhu Zhang University of Science and Technology of China
  • Yang Yang Institute of Automation, Chinese Academy of Sciences
  • Jian Cheng Institute of Automation, Chinese Academy of Sciences
  • Jianlong Chang Institute of Automation, Chinese Academy of Sciences
  • Xu Liang Institute of Automation, Chinese Academy of Sciences
  • Zeng-Guang Hou Institute of Automation, Chinese Academy of Sciences

Abstract

RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations between RGB and IR images. The key solution is to learn aligned features that bridge the RGB and IR modalities. However, because correspondence labels between individual pairs of RGB and IR images are unavailable, most methods try to alleviate the variations with set-level alignment, reducing the distance between the entire RGB and IR sets. This set-level alignment may leave some instances misaligned, which limits performance for RGB-IR Re-ID. Different from existing methods, in this paper we propose to generate cross-modality paired images and perform both global set-level and fine-grained instance-level alignments. Our proposed method enjoys several merits. First, it performs set-level alignment by disentangling modality-specific and modality-invariant features. Compared with conventional methods, ours explicitly removes the modality-specific features, so the modality variation can be better reduced. Second, given cross-modality unpaired images of a person, our method can generate cross-modality paired images by exchanging modality-specific features between the images. With these pairs, we can directly perform instance-level alignment by minimizing the distance between every pair of images. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favourably against state-of-the-art methods. In particular, on the SYSU-MM01 dataset, our model achieves gains of 9.2% and 7.7% in Rank-1 accuracy and mAP, respectively. Code is available at https://github.com/wangguanan/JSIA-ReID.
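The instance-level alignment idea in the abstract amounts to minimizing the feature distance between each generated cross-modality pair. A minimal sketch (hypothetical feature arrays and loss, not the authors' actual implementation, which is in the linked repository):

```python
import numpy as np

def instance_alignment_loss(rgb_feats: np.ndarray, ir_feats: np.ndarray) -> float:
    """Mean squared Euclidean distance between paired RGB/IR features.

    rgb_feats, ir_feats: (N, D) arrays where row i of each array holds
    the embedding of the same person instance in the two modalities.
    """
    diff = rgb_feats - ir_feats
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# Toy example: two instances with 4-dimensional embeddings.
rgb = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]])
ir = np.array([[1.0, 0.0, 0.0, 0.0],   # perfectly aligned pair
               [0.0, 0.0, 1.0, 0.0]])  # misaligned pair
loss = instance_alignment_loss(rgb, ir)  # 0.0 for pair 1, 2.0 for pair 2 -> mean 1.0
```

Because the generated pairs share the same identity by construction, this per-instance term can be added to the usual set-level objectives (e.g. identity classification and distribution-matching losses) during training.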

Published
2020-04-03
Section
AAAI Technical Track: Vision