Multi-Instance Multi-Label Action Recognition and Localization Based on Spatio-Temporal Pre-Trimming for Untrimmed Videos

Authors

  • Xiao-Yu Zhang Institute of Information Engineering, Chinese Academy of Sciences
  • Haichao Shi Institute of Information Engineering, Chinese Academy of Sciences
  • Changsheng Li School of Computer Science and Technology, Beijing Institute of Technology
  • Peng Li China University of Petroleum (East China)

DOI:

https://doi.org/10.1609/aaai.v34i07.6986

Abstract

Weakly supervised action recognition and localization for untrimmed videos is a challenging problem with extensive applications. The overwhelming amount of irrelevant background content in untrimmed videos severely hampers effective identification of the actions of interest. In this paper, we propose a novel multi-instance multi-label modeling network based on spatio-temporal pre-trimming to recognize actions and locate the corresponding frames in untrimmed videos. Motivated by the fact that the person is the key factor in a human action, we spatially and temporally segment each untrimmed video into person-centric clips using pose estimation and tracking techniques. Given the bag-of-instances structure associated with video-level labels, action recognition is naturally formulated as a multi-instance multi-label learning problem. The network is optimized iteratively with selective coarse-to-fine pre-trimming based on instance-label activation. After convergence, temporal localization is further achieved with a local-global temporal class activation map. Extensive experiments are conducted on two benchmark datasets, i.e., THUMOS14 and ActivityNet1.3, and the experimental results clearly corroborate the efficacy of our method compared with state-of-the-art approaches.
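The temporal class activation map mentioned in the abstract can be illustrated with a minimal sketch of the general idea: per-segment class activations from a linear classification head are pooled over time for video-level recognition, then thresholded for temporal localization. All shapes, weights, and the pooling/thresholding choices below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical dimensions: T temporal segments, D feature dims, C action classes.
rng = np.random.default_rng(0)
T, D, C = 8, 16, 3
features = rng.normal(size=(T, D))   # per-segment features from some backbone
W = rng.normal(size=(D, C))          # weights of a linear classification head

# Temporal class activation map: per-segment, per-class activation scores.
tcam = features @ W                  # shape (T, C)

# Video-level class scores via average pooling over time, which is what
# allows training with only video-level (weak) labels.
video_scores = tcam.mean(axis=0)     # shape (C,)

# Localization sketch: for the top-scoring class, keep segments whose
# activation exceeds a fraction of that class's activation range.
c = int(video_scores.argmax())
act = tcam[:, c]
threshold = act.min() + 0.5 * (act.max() - act.min())
mask = act >= threshold              # boolean mask over the T segments
```

The thresholding fraction (0.5 here) is an arbitrary choice for illustration; in practice such parameters are tuned, and the paper's local-global formulation refines this basic scheme.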

Published

2020-04-03

How to Cite

Zhang, X.-Y., Shi, H., Li, C., & Li, P. (2020). Multi-Instance Multi-Label Action Recognition and Localization Based on Spatio-Temporal Pre-Trimming for Untrimmed Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12886-12893. https://doi.org/10.1609/aaai.v34i07.6986

Section

AAAI Technical Track: Vision