Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, 36
Issue: No. 2: AAAI-22 Technical Tracks 2
Track: AAAI Technical Track on Computer Vision II
Abstract:
Temporal sentence grounding (TSG) is a crucial and fundamental task for video understanding. Although existing methods train well-designed deep networks on large amounts of data, we find that they easily forget rarely appearing cases during training due to the imbalanced data distribution, which weakens model generalization and leads to unsatisfactory performance. To tackle this issue, we propose a memory-augmented network, called Memory-Guided Semantic Learning Network (MGSL-Net), that learns and memorizes the rarely appearing content in the TSG task. Specifically, our proposed model consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module. We first align the given video-query pair with a cross-modal graph convolutional network, and then use a memory module to record the cross-modal shared semantic features in a domain-specific persistent memory. During training, the memory slots are dynamically associated with both common and rare cases, alleviating the forgetting issue. During testing, rare cases can thus be enhanced by retrieving the stored memories, leading to better generalization. Finally, the heterogeneous attention module integrates the enhanced multi-modal features in both the video and query domains. Experimental results on three benchmarks show the superiority of our method in both effectiveness and efficiency; it substantially improves accuracy not only on the entire dataset but also on rare cases.
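The following is a minimal sketch of the memory-read step the abstract describes (attending over persistent memory slots and fusing the retrieved content back into the features), not the authors' released code: the slot count, feature dimension, query projection, and residual fusion are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentation(nn.Module):
    """Persistent memory storing cross-modal shared semantics in learnable slots.

    Input features attend over the slots; the retrieved memory is fused back
    into the features, so rare cases can be enhanced at test time by content
    associated with the slots during training.
    """
    def __init__(self, num_slots: int = 128, dim: int = 256):
        super().__init__()
        # Domain-specific persistent memory slots, learned during training.
        self.memory = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, seq_len, dim) aligned cross-modal features.
        q = self.query_proj(feats)                     # (B, T, D)
        attn = F.softmax(q @ self.memory.t(), dim=-1)  # (B, T, num_slots)
        retrieved = attn @ self.memory                 # (B, T, D)
        # Residual fusion is an assumption; the paper's exact fusion may differ.
        return feats + retrieved

# Usage: enhance aligned video (or query) features with stored semantics.
mem = MemoryAugmentation(num_slots=128, dim=256)
video_feats = torch.randn(2, 64, 256)  # toy batch of frame features
enhanced = mem(video_feats)            # same shape, memory-enhanced
```

Under this reading, the same memory is consulted for every input, so a rare test case whose nearest slots were shaped by related training examples still retrieves useful shared semantics.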
DOI: 10.1609/aaai.v36i2.20058