Weakly-Supervised Video Re-Localization with Multiscale Attention Model

Authors

  • Yung-Han Huang, National Taiwan University
  • Kuang-Jui Hsu, Qualcomm
  • Shyh-Kang Jeng, National Taiwan University
  • Yen-Yu Lin, National Chiao Tung University

DOI:

https://doi.org/10.1609/aaai.v34i07.6763

Abstract

Video re-localization aims to localize a sub-sequence, called the target segment, of an untrimmed reference video such that the segment is similar to a given query video. In this work, we propose an attention-based model that accomplishes this task in a weakly supervised setting; that is, we train our CNN-based model without using the annotated locations of the target segments in the reference videos. Our model contains three modules. First, it employs a pre-trained C3D network for feature extraction. Second, we design an attention mechanism to extract multiscale temporal features, which are then used to estimate the similarity between the query video and the reference video. Third, a localization layer detects where the target segment lies in the reference video by determining whether each reference frame is consistent with the query video. The resulting CNN model is trained with the proposed co-attention loss, which discriminatively separates the target segment from the rest of the reference video: it maximizes the similarity between the query video and the target segment while minimizing the similarity between the target segment and the remaining frames of the reference video. Our model can also be adapted to the fully supervised setting. Evaluated on a public dataset, our method achieves state-of-the-art performance under both weakly supervised and fully supervised settings.
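To make the co-attention objective concrete, below is a minimal sketch in PyTorch. It is not the paper's exact formulation: the hinge margin, the attention-weighted pooling, and all names (co_attention_loss, query_feat, ref_feats, attn) are illustrative assumptions. The idea is to pool the reference frames into a candidate target segment and its complement using soft attention weights, then pull the query feature toward the target pool while pushing the target pool away from the rest.

```python
import torch
import torch.nn.functional as F

def co_attention_loss(query_feat, ref_feats, attn, margin=0.5):
    """Hedged sketch of a co-attention-style loss (names are assumptions).

    query_feat: (d,)   pooled feature of the query video
    ref_feats:  (T, d) per-frame features of the reference video
    attn:       (T,)   soft weights in [0, 1]; high values mark frames
                       believed to belong to the target segment
    """
    # Attention-weighted pooling: candidate target segment vs. the rest.
    target = (attn.unsqueeze(1) * ref_feats).sum(0) / (attn.sum() + 1e-8)
    rest = ((1 - attn).unsqueeze(1) * ref_feats).sum(0) / ((1 - attn).sum() + 1e-8)

    sim_qt = F.cosine_similarity(query_feat, target, dim=0)  # pull together
    sim_tr = F.cosine_similarity(target, rest, dim=0)        # push apart

    # Hinge form: reward query/target agreement, penalize target/rest agreement.
    return F.relu(margin - sim_qt + sim_tr)

# Example usage with random stand-ins for C3D-style features.
q = torch.randn(512)                 # query feature
R = torch.randn(20, 512)             # 20 reference frames
a = torch.sigmoid(torch.randn(20))   # soft frame-level attention
loss = co_attention_loss(q, R, a)
```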

Published

2020-04-03

How to Cite

Huang, Y.-H., Hsu, K.-J., Jeng, S.-K., & Lin, Y.-Y. (2020). Weakly-Supervised Video Re-Localization with Multiscale Attention Model. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11077-11084. https://doi.org/10.1609/aaai.v34i07.6763

Section

AAAI Technical Track: Vision