AAAI Publications, Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence

Describing Spatio-Temporal Relations between Object Volumes in Video Streams
Nouf Al Harbi, Yoshihiko Gotoh

Last modified: 2015-04-01

Abstract


This paper is concerned with an extension of AngledCORE-9 by Sokeh, Gould, and Renz, a comprehensive representation of spatial information that can be efficiently extracted from interacting objects in video using their approximate bounding boxes. Spatial information is important for identifying relations between multiple objects, hence this work is a step towards tasks such as semantic content analysis and visual information access. To that end we present an approach that incorporates the spatio-temporal volume of objects into AngledCORE-9. The approach detects, tracks and segments object volumes from a video stream, from which spatial information is identified in an efficient manner. Accurate spatial and temporal information is obtained through precise representation of the object's shape region and its oriented bounding box. A human action classification task is adopted to assess the performance of the approach. Experiments on two challenging datasets indicate that the approach performs comparably to the state of the art.
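As an illustration of the oriented bounding box representation mentioned above (not the authors' implementation), the following minimal Python sketch derives a minimum-area rotated rectangle from a binary object segmentation mask using OpenCV; the mask, function name and OpenCV 4 API are assumptions for the example only.

    import numpy as np
    import cv2

    def oriented_bbox(mask):
        """Return the four corners of the minimum-area (oriented) bounding box
        of a binary object mask (2-D uint8 array, non-zero on the object)."""
        # Extract the object contour(s) from the segmentation mask (OpenCV 4 signature).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        points = np.vstack([c.reshape(-1, 2) for c in contours])
        # minAreaRect gives centre, (width, height) and rotation angle;
        # boxPoints converts that rotated rectangle to its four corner points.
        rect = cv2.minAreaRect(points)
        return cv2.boxPoints(rect)

    # Example: a tilted quadrilateral object drawn into an empty mask.
    mask = np.zeros((100, 100), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.array([[20, 40], [60, 20], [80, 60], [40, 80]]), 255)
    print(oriented_bbox(mask))

Unlike an axis-aligned box, the oriented box follows the object's rotation, which is what allows tighter spatial relations to be read off between interacting objects.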
