Learning a Visual Tracker from a Single Movie without Annotation

Authors

  • Lingxiao Yang The Hong Kong Polytechnic University
  • David Zhang The Hong Kong Polytechnic University
  • Lei Zhang The Hong Kong Polytechnic University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019095

Abstract

The recent success of deep networks in visual tracker learning largely relies on human-labeled data, which are expensive to annotate. Recently, some unsupervised methods have been proposed to learn visual trackers without labeled data, but their performance lags far behind that of supervised methods. We identify the main bottleneck of these methods as the inconsistent objectives between the off-line training and online tracking stages. To address this problem, we propose a novel unsupervised learning pipeline based on a discriminative correlation filter network. Our method iteratively updates the tracker by alternating between target localization and network optimization. In particular, we propose to learn the network from a single movie, which can be obtained far more easily than thousands of video clips or millions of images. Extensive experiments demonstrate that our approach is insensitive to the choice of movie, and that the trained visual tracker achieves leading performance among existing unsupervised learning approaches. Even compared with the same network trained with human-labeled bounding boxes, our tracker achieves similar results on many tracking benchmarks. Code is available at: https://github.com/ZjjConan/UL-Tracker-AAAI2019.
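The alternating scheme described in the abstract can be illustrated with a minimal single-channel NumPy sketch of a discriminative correlation filter loop: localize the target with the current filter, recentre the patch on the detected peak, then refit the filter on that pseudo-labeled patch. This is only an illustrative sketch, not the authors' network-based implementation (see the linked repository); the Gaussian label, the regularizer LAMBDA, the blending rate, and the random stand-in frames are all hypothetical choices.

```python
import numpy as np

SIGMA = 2.0    # bandwidth of the desired Gaussian response (assumed)
LAMBDA = 1e-3  # ridge-regression regularizer (assumed)

def gaussian_label(shape, sigma=SIGMA):
    """Desired response map: a Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label):
    """Closed-form correlation filter update in the Fourier domain."""
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(label)
    return (np.conj(X) * Y) / (np.conj(X) * X + LAMBDA)

def localize(filt, patch):
    """Apply the filter and return the response peak (pseudo location)."""
    response = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

# Alternating loop over unlabeled frames (random stand-ins here).
rng = np.random.default_rng(0)
frames = [rng.standard_normal((64, 64)) for _ in range(50)]
label = gaussian_label(frames[0].shape)
filt = train_filter(frames[0], label)  # bootstrap from an initial patch

for frame in frames[1:]:
    dy, dx = localize(filt, frame)                        # 1) target localization
    patch = np.roll(frame, (32 - dy, 32 - dx), axis=(0, 1))  # recentre on the peak
    filt = 0.98 * filt + 0.02 * train_filter(patch, label)   # 2) filter optimization
```

In the paper the filter is part of a trainable network and the optimization step updates network weights rather than a closed-form filter, but the localize-then-optimize alternation is the same.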

Published

2019-07-17

How to Cite

Yang, L., Zhang, D., & Zhang, L. (2019). Learning a Visual Tracker from a Single Movie without Annotation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9095-9102. https://doi.org/10.1609/aaai.v33i01.33019095

Section

AAAI Technical Track: Vision