Semantics-Aligned Representation Learning for Person Re-Identification

Authors

  • Xin Jin, University of Science and Technology of China
  • Cuiling Lan, Microsoft Research Asia
  • Wenjun Zeng, Microsoft Research Asia
  • Guoqiang Wei, University of Science and Technology of China
  • Zhibo Chen, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i07.6775

Abstract

Person re-identification (reID) aims to match person images in order to retrieve those with the same identity. This is a challenging task, as the images to be matched are generally semantically misaligned due to the diversity of human poses and capture viewpoints, the incompleteness of visible bodies (caused by occlusion), etc. In this paper, we propose a framework that drives the reID network to learn semantics-aligned feature representations through carefully designed supervision. Specifically, we build a Semantics Aligning Network (SAN), which consists of a base network serving as the encoder (SA-Enc) for reID and a decoder (SA-Dec) for reconstructing/regressing the densely semantically aligned full texture image. We jointly train the SAN under the supervision of person re-identification and aligned texture generation. Moreover, at the decoder, besides the reconstruction loss, we add triplet reID constraints over the feature maps as perceptual losses. The decoder is discarded at inference, so our scheme is computationally efficient. Ablation studies demonstrate the effectiveness of our design. We achieve state-of-the-art performance on the benchmark datasets CUHK03, Market1501, and MSMT17, and on the partial-person dataset Partial REID.
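
To make the encoder-decoder arrangement described above concrete, below is a minimal PyTorch sketch of the joint training objective. This is an illustration under stated assumptions, not the authors' implementation: the layer shapes, the loss weight w_rec, the L1 reconstruction loss, and the identity count (751, the number of training identities in Market1501) are placeholders, and the triplet reID constraints applied to the decoder feature maps as perceptual losses are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SAEnc(nn.Module):
    # Encoder (SA-Enc): a tiny CNN standing in for the reID base network.
    def __init__(self, feat_dim=256, num_ids=751):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        fmap = self.backbone(x)                           # spatial feature map
        feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # reID embedding
        return fmap, feat, self.classifier(feat)

class SADec(nn.Module):
    # Decoder (SA-Dec): regresses the aligned full texture image from the
    # encoder feature map; used only during training, discarded at inference.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, fmap):
        return self.up(fmap)

def san_loss(enc, dec, images, labels, texture_gt, w_rec=1.0):
    # Joint objective: identity cross-entropy on the encoder embedding plus
    # reconstruction of the aligned texture image. The paper additionally
    # adds triplet reID constraints on decoder feature maps as perceptual
    # losses; that term is left out of this sketch.
    fmap, feat, logits = enc(images)
    loss_id = F.cross_entropy(logits, labels)
    loss_rec = F.l1_loss(dec(fmap), texture_gt)
    return loss_id + w_rec * loss_rec

# Toy usage with random tensors in place of person crops and their
# densely aligned texture targets.
enc, dec = SAEnc(), SADec()
imgs = torch.randn(4, 3, 64, 32)
textures = torch.randn(4, 3, 64, 32)
ids = torch.randint(0, 751, (4,))
san_loss(enc, dec, imgs, ids, textures).backward()

At test time only the pooled embedding from SA-Enc would be used for matching, which is why discarding the decoder adds no inference cost.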

Published

2020-04-03

How to Cite

Jin, X., Lan, C., Zeng, W., Wei, G., & Chen, Z. (2020). Semantics-Aligned Representation Learning for Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11173-11180. https://doi.org/10.1609/aaai.v34i07.6775

Section

AAAI Technical Track: Vision