Segregated Temporal Assembly Recurrent Networks for Weakly Supervised Multiple Action Detection

Authors

  • Yunlu Xu, Hikvision Research Institute
  • Chengwei Zhang, Shanghai Jiaotong University
  • Zhanzhan Cheng, Hikvision Research Institute
  • Jianwen Xie, Hikvision
  • Yi Niu, Hikvision Research Institute
  • Shiliang Pu, Hikvision Research Institute
  • Fei Wu, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019070

Abstract

This paper proposes a segregated temporal assembly recurrent (STAR) network for weakly-supervised multiple action detection. The model learns from untrimmed videos with only video-level label supervision and predicts the temporal intervals of multiple actions. Specifically, we first assemble video clips according to class labels with an attention mechanism that learns class-variable attention weights, which helps suppress noise from the background or from other actions. Secondly, we model temporal relationships between actions by feeding the assembled features into an enhanced recurrent neural network. Finally, we transform the output of the recurrent neural network into the corresponding action distribution. To generate more precise temporal proposals, we design a score term called segregated temporal gradient-weighted class activation mapping (ST-GradCAM), which is fused with the attention weights. Experiments on the THUMOS’14 and ActivityNet1.3 datasets show that our approach outperforms state-of-the-art weakly-supervised methods and performs on par with fully-supervised counterparts.
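To make the attention-based assembly step more concrete, the following is a minimal NumPy sketch, not the authors' implementation. It assumes pre-extracted per-clip features and hypothetical parameters (W_att, W_cls); the enhanced recurrent network and the exact ST-GradCAM formulation are replaced by simple stand-ins, so the snippet only illustrates class-variable attention pooling and an attention-fused temporal score.

```python
# Minimal illustrative sketch of class-wise temporal attention assembly
# over pre-extracted clip features. Shapes, parameters, and the final
# scoring rule are assumptions for illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

T, D, C = 20, 1024, 5                      # clips per video, feature dim, classes
features = rng.standard_normal((T, D))     # stand-in for pre-extracted clip features

# Hypothetical class-specific attention parameters: one scoring vector per class.
W_att = rng.standard_normal((C, D)) * 0.01

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Class-variable attention weights over time: shape (C, T).
att = softmax(W_att @ features.T, axis=1)

# "Segregated" assembly: one attention-pooled feature per class, shape (C, D).
assembled = att @ features

# Stand-in linear classifier head (the paper instead feeds assembled features
# into an enhanced recurrent network before predicting the action distribution).
W_cls = rng.standard_normal((C, D)) * 0.01
class_scores = np.sum(W_cls * assembled, axis=1)   # per-class video-level score

# Simple temporal localization score that fuses per-clip class evidence with
# the attention weights, loosely in the spirit of ST-GradCAM fused with attention.
clip_activation = features @ W_cls.T               # (T, C) per-clip class evidence
loc_score = clip_activation.T * att                # (C, T) fused temporal score

# Threshold the fused score to propose active clips for the detected class.
c = int(np.argmax(class_scores))
active = np.where(loc_score[c] > loc_score[c].mean())[0]
print("predicted class:", c, "active clips:", active)
```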

Published

2019-07-17

How to Cite

Xu, Y., Zhang, C., Cheng, Z., Xie, J., Niu, Y., Pu, S., & Wu, F. (2019). Segregated Temporal Assembly Recurrent Networks for Weakly Supervised Multiple Action Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9070-9078. https://doi.org/10.1609/aaai.v33i01.33019070

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision