Context Modulated Dynamic Networks for Actor and Action Video Segmentation with Language Queries

Authors

  • Hao Wang Xidian University
  • Cheng Deng Xidian University
  • Fan Ma University of Technology Sydney
  • Yi Yang University of Technology Sydney

DOI:

https://doi.org/10.1609/aaai.v34i07.6895

Abstract

Actor and action video segmentation with language queries aims to segment the objects in a video referred to by a natural-language expression. This task requires comprehensive language reasoning and fine-grained video understanding. Previous methods mainly leverage dynamic convolutional networks to match visual and semantic representations. However, dynamic convolution neglects spatial context when processing each region in a frame and thus struggles to segment similar objects in complex scenarios. To address this limitation, we construct a context modulated dynamic convolutional network. Specifically, we propose a context modulated dynamic convolutional operation in which the kernels for a specific region are generated from both the language sentence and the surrounding context features. Moreover, we devise a temporal encoder that incorporates motion into the visual features to better match the query descriptions. Extensive experiments on two benchmark datasets, Actor-Action Dataset Sentences (A2D Sentences) and J-HMDB Sentences, demonstrate that our proposed approach notably outperforms state-of-the-art methods.
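The core idea can be illustrated with a minimal sketch: at each spatial location, the convolution kernel is generated from the sentence embedding together with that location's pooled neighbourhood features, so visually similar regions in different contexts can be scored differently. The projection matrices, feature sizes, and the 3x3 average-pool context are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 5, 5        # channels, height, width of the visual feature map
L = 6                    # dimensionality of the pooled sentence embedding

visual = rng.standard_normal((C, H, W))  # per-frame visual features
lang = rng.standard_normal(L)            # language-query embedding

# Hypothetical projection weights (learned in the real model).
W_lang = rng.standard_normal((C, L)) * 0.1
W_ctx = rng.standard_normal((C, C)) * 0.1

def context_feature(feat, y, x):
    """Average-pool the 3x3 neighbourhood around (y, x) as surrounding context."""
    y0, y1 = max(0, y - 1), min(feat.shape[1], y + 2)
    x0, x1 = max(0, x - 1), min(feat.shape[2], x + 2)
    return feat[:, y0:y1, x0:x1].mean(axis=(1, 2))

# Context-modulated dynamic convolution (1x1 for brevity): the kernel applied
# at each location depends on both the sentence AND that location's context.
response = np.empty((H, W))
for y in range(H):
    for x in range(W):
        kernel = W_lang @ lang + W_ctx @ context_feature(visual, y, x)
        response[y, x] = kernel @ visual[:, y, x]

mask = 1.0 / (1.0 + np.exp(-response))   # sigmoid -> soft segmentation mask
```

A plain dynamic convolution would drop the `W_ctx` term and use one language-generated kernel everywhere, which is exactly the limitation the abstract points out for distinguishing similar objects.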

Published

2020-04-03

How to Cite

Wang, H., Deng, C., Ma, F., & Yang, Y. (2020). Context Modulated Dynamic Networks for Actor and Action Video Segmentation with Language Queries. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12152-12159. https://doi.org/10.1609/aaai.v34i07.6895

Section

AAAI Technical Track: Vision