FACT: Fused Attention for Clothing Transfer with Generative Adversarial Networks

Authors

  • Yicheng Zhang, Shanghai Jiao Tong University
  • Lei Li, SenseTime
  • Li Song, Shanghai Jiao Tong University
  • Rong Xie, Shanghai Jiao Tong University
  • Wenjun Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v34i07.6987

Abstract

Clothing transfer is a challenging task in computer vision whose goal is to change the clothing style of a person in an input image conditioned on a given language description. However, existing approaches, which rely on a conventional fully convolutional generator, have limited ability in delicate colorization and texture synthesis. To tackle this problem, we propose a novel semantic-based Fused Attention model for Clothing Transfer (FACT), which enables fine-grained synthesis, high global consistency, and plausible hallucination in images. Towards this end, we incorporate two attention modules at the spatial level: (i) soft attention that attends to the most relevant word positions in the sentence, and (ii) self-attention that models long-range dependencies on feature maps. Furthermore, we develop a stylized channel-wise attention module to capture correlations at the feature-channel level. We effectively fuse these attention modules in the generator and achieve better performance than the state-of-the-art method on the DeepFashion dataset. Qualitative and quantitative comparisons against the baselines demonstrate the effectiveness of our approach.
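The sketch below illustrates, in PyTorch, the three attention ingredients named in the abstract (word-level soft attention, spatial self-attention, and channel-wise attention) and one way to fuse them inside a generator block. Module names, tensor shapes, and the fusion order are assumptions made for illustration only; this is not the authors' released implementation.

```python
# Minimal sketch of the three attention modules described in the abstract.
# All names, dimensions, and the residual fusion strategy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftWordAttention(nn.Module):
    """Attend each image region to the most relevant word embeddings."""
    def __init__(self, channels, word_dim):
        super().__init__()
        self.proj = nn.Linear(word_dim, channels)  # map words into the image feature space

    def forward(self, feat, words):
        # feat: (B, C, H, W) image features; words: (B, L, word_dim) sentence embeddings
        b, c, h, w = feat.shape
        q = feat.flatten(2).transpose(1, 2)                          # (B, HW, C) region queries
        k = self.proj(words)                                         # (B, L, C) word keys/values
        attn = F.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)   # (B, HW, L)
        ctx = attn @ k                                               # (B, HW, C) word context per region
        return ctx.transpose(1, 2).view(b, c, h, w)

class SelfAttention(nn.Module):
    """Model long-range spatial dependencies on the feature map (SAGAN-style)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))                    # learned residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                     # (B, HW, C/8)
        k = self.k(x).flatten(2)                                     # (B, C/8, HW)
        v = self.v(x).flatten(2)                                     # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                              # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

class ChannelAttention(nn.Module):
    """Reweight feature channels using their global (style) statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        s = x.mean(dim=(2, 3))                                       # (B, C) global average pooling
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)                   # (B, C, 1, 1) channel weights
        return x * w

class FusedAttentionBlock(nn.Module):
    """Fuse word, spatial self-, and channel-wise attention inside a generator block."""
    def __init__(self, channels, word_dim):
        super().__init__()
        self.word_attn = SoftWordAttention(channels, word_dim)
        self.self_attn = SelfAttention(channels)
        self.chan_attn = ChannelAttention(channels)

    def forward(self, feat, words):
        fused = feat + self.word_attn(feat, words)                   # inject text context as a residual
        fused = self.self_attn(fused)                                # enforce global spatial consistency
        return self.chan_attn(fused)                                 # style-aware channel reweighting
```

In this sketch the text context is injected as a residual before spatial self-attention, and channel-wise reweighting is applied last; the paper's actual fusion strategy and hyperparameters may differ.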

Published

2020-04-03

How to Cite

Zhang, Y., Li, L., Song, L., Xie, R., & Zhang, W. (2020). FACT: Fused Attention for Clothing Transfer with Generative Adversarial Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12894-12901. https://doi.org/10.1609/aaai.v34i07.6987

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision