AAAI Publications, Thirty-First AAAI Conference on Artificial Intelligence

Latent Dirichlet Allocation for Unsupervised Activity Analysis on an Autonomous Mobile Robot
Paul Duckworth, Muhannad Alomari, James Charles, David C. Hogg, Anthony G. Cohn

Last modified: 2017-02-12


For autonomous robots to collaborate on joint tasks with humans, they require a shared understanding of an observed scene. We present a method for unsupervised learning of common human movements and activities on an autonomous mobile robot, which generalises and improves on recent results. Our framework encodes multiple qualitative abstractions of RGBD video from human observations and does not require external temporal segmentation. Analogously to information retrieval in text corpora, each human detection is modelled as a random mixture of latent topics. A generative probabilistic technique is used to recover topic distributions over an auto-generated vocabulary of discrete, qualitative spatio-temporal code words. We show that the emergent categories align well with human activities as interpreted by a human annotator. This is a particularly challenging task on a mobile robot because varying camera viewpoints lead to incomplete, partial and occluded human detections.
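The text-corpus analogy above can be sketched in code: treat each human detection as a "document" whose "words" are qualitative spatio-temporal code words, then fit LDA to recover per-detection topic (activity) mixtures. This is a minimal illustrative sketch, not the authors' implementation; the code words, counts, and use of scikit-learn's `LatentDirichletAllocation` are all assumptions made for the example.

```python
# Hypothetical sketch of the paper's pipeline: each row of X is one human
# detection, represented as a bag of counts over an auto-generated vocabulary
# of qualitative spatio-temporal code words. The vocabulary entries and the
# counts below are invented purely for illustration.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Invented example vocabulary of discrete qualitative code words.
vocab = ["hand_near_cup", "hand_moves_to_mouth", "torso_static",
         "legs_moving", "hand_near_keyboard", "head_down"]

# Bag-of-codewords counts: rows are human detections ("documents").
# The first three rows loosely resemble one activity, the last three another.
X = np.array([
    [5, 4, 3, 0, 0, 0],
    [4, 5, 2, 1, 0, 0],
    [6, 3, 4, 0, 1, 0],
    [0, 0, 2, 1, 6, 5],
    [1, 0, 3, 0, 5, 4],
    [0, 1, 2, 1, 7, 6],
])

# Fit LDA with two latent topics (candidate activity categories).
lda = LatentDirichletAllocation(n_components=2, max_iter=50, random_state=0)
theta = lda.fit_transform(X)  # per-detection topic mixtures, rows sum to 1

# Each detection's dominant latent topic is a candidate activity label.
labels = theta.argmax(axis=1)
print(labels)
```

In the paper's setting, the counts in `X` would come from the robot's qualitative abstractions of RGBD video rather than being hand-written, and the number of topics would be chosen over the full corpus of detections.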


Unsupervised Learning; Qualitative Spatio-Temporal Representations; Mobile Robotics; Plan and Activity Recognition; Latent Dirichlet Allocation;
