Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence, 31
Issue:
No. 1: Thirty-First AAAI Conference On Artificial Intelligence
Track:
Demonstrations
Abstract:
Given any complicated or specialized video content search query, e.g., "Batkid (a kid in a Batman costume)" or "destroyed buildings", existing methods require manually labeled data to build detectors for searching. We present a demonstration of an artificial intelligence application, Webly-Labeled Learning (WELL), that enables learning of ad-hoc concept detectors over unlimited Internet videos without any manual annotations. A considerable number of videos on the web are associated with rich but noisy contextual information, such as the title, which provides a type of weak annotation or label of the video content. To leverage this information, our system employs state-of-the-art webly-supervised learning (WELL) (Liang et al.). WELL considers multi-modal information, including deep visual, audio, and speech features, to automatically learn accurate video detectors based on the user query. The detectors learned from a large number of web videos allow users to search for relevant videos in their personal video archives without requiring any textual metadata, as conveniently as searching on YouTube.
DOI:
10.1609/aaai.v31i1.10541