Global Context-Aware Progressive Aggregation Network for Salient Object Detection

Authors

  • Zuyao Chen, University of Chinese Academy of Sciences
  • Qianqian Xu, Chinese Academy of Sciences
  • Runmin Cong, Beijing Jiaotong University
  • Qingming Huang, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v34i07.6633

Abstract

Deep convolutional neural networks have achieved competitive performance in salient object detection, where learning effective and comprehensive features plays a critical role. Most previous works adopted multi-level feature integration yet ignored the gap between different features. Moreover, high-level features are gradually diluted as they are passed down the top-down pathway. To remedy these issues, we propose a novel network named GCPANet to effectively integrate low-level appearance features, high-level semantic features, and global context features through progressive context-aware Feature Interweaved Aggregation (FIA) modules and generate the saliency map in a supervised way. Moreover, a Head Attention (HA) module is used to reduce information redundancy and enhance the top-layer features by leveraging spatial and channel-wise attention, and the Self Refinement (SR) module is utilized to further refine and heighten the input features. Furthermore, we design the Global Context Flow (GCF) module to generate the global context information at different stages, which aims to learn the relationship among different salient regions and alleviate the dilution effect of high-level features. Experimental results on six benchmark datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
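To make the channel-wise and spatial attention mentioned for the Head Attention (HA) module concrete, here is a minimal, framework-free sketch of that general attention pattern applied to a single (C, H, W) feature map. This is an illustrative sketch of the generic idea only, not the authors' exact formulation; the function name and the choice of global average pooling followed by sigmoid gating are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Generic channel-then-spatial attention over a (C, H, W) feature map.

    Illustrative sketch of the attention pattern the HA module builds on,
    not the paper's exact implementation (which uses learned parameters).
    """
    # Channel attention: squeeze the spatial dims with global average
    # pooling, then gate each channel with a sigmoid weight.
    channel_w = sigmoid(feat.mean(axis=(1, 2)))      # shape (C,)
    feat = feat * channel_w[:, None, None]
    # Spatial attention: pool across channels, then gate each location.
    spatial_w = sigmoid(feat.mean(axis=0))           # shape (H, W)
    return feat * spatial_w[None, :, :]

refined = channel_spatial_attention(np.random.rand(64, 16, 16))
print(refined.shape)  # (64, 16, 16)
```

In the paper these gates are produced by learned convolutional layers rather than plain pooling; the sketch only shows how the two attentions suppress redundant channels and background locations before the features enter the aggregation pathway.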

Published

2020-04-03

How to Cite

Chen, Z., Xu, Q., Cong, R., & Huang, Q. (2020). Global Context-Aware Progressive Aggregation Network for Salient Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10599-10606. https://doi.org/10.1609/aaai.v34i07.6633

Section

AAAI Technical Track: Vision