AAAI Publications, Thirty-First AAAI Conference on Artificial Intelligence

Webly-Supervised Learning of Multimodal Video Detectors
Junwei Liang, Lu Jiang, Alexander Hauptmann

Last modified: 2017-02-12


Given any complicated or specialized video content search query, e.g. "Batkid (a kid in a Batman costume)" or "destroyed buildings", existing methods require manually labeled data to build detectors for searching. We present a demonstration of an artificial intelligence application, Webly-labeled Learning (WELL), that learns ad-hoc concept detectors over unlimited Internet videos without any manual annotations. A considerable number of videos on the web are associated with rich but noisy contextual information, such as the title, which provides a type of weak annotation, or label, of the video content. To leverage this information, our system employs state-of-the-art webly-supervised learning (Liang et al.). WELL considers multi-modal information, including deep visual, audio, and speech features, to automatically learn accurate video detectors based on the user query. The detectors learned from a large number of web videos allow users to search for relevant videos in their personal video archives without requiring any textual metadata, yet as conveniently as searching on YouTube.
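The weak-annotation idea described above can be illustrated with a minimal sketch: video titles act as noisy labels for an ad-hoc query. The function name and the sample videos below are purely illustrative and are not part of the WELL system.

```python
# Hypothetical sketch of the weak-labeling step that webly-supervised
# learning builds on: a video is noisily labeled positive for a query
# if any query term appears in its title. All names/data are illustrative.

def weak_labels(query_terms, videos):
    """Return a noisy 0/1 label per video id by matching query terms
    against the (lowercased) title -- a weak annotation, not ground truth."""
    labels = {}
    for vid, title in videos.items():
        title_lower = title.lower()
        labels[vid] = int(any(term in title_lower for term in query_terms))
    return labels

videos = {
    "v1": "Batkid saves San Francisco in batman costume",
    "v2": "Cute cat compilation 2016",
    "v3": "Kid dressed as batman surprises crowd",
}
print(weak_labels(["batkid", "batman"], videos))
# -> {'v1': 1, 'v2': 0, 'v3': 1}
```

In the full system, these noisy labels would seed a detector trained on multi-modal (visual, audio, speech) features, which is then refined as more confident examples are identified.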


Video Analysis; Webly-supervised Learning; Video Classification
