Weakly Supervised Scene Parsing with Point-Based Distance Metric Learning

  • Rui Qian Peking University
  • Yunchao Wei University of Illinois, Urbana-Champaign
  • Honghui Shi IBM Research
  • Jiachen Li University of Illinois, Urbana-Champaign
  • Jiaying Liu Peking University
  • Thomas Huang University of Illinois, Urbana-Champaign

Abstract

Semantic scene parsing suffers from the fact that pixel-level annotations are hard to collect. To tackle this issue, we propose Point-based Distance Metric Learning (PDML) in this paper. PDML does not require densely annotated masks and only leverages several labeled points, which are much easier to obtain, to guide the training process. Concretely, we leverage the semantic relationship among the annotated points by encouraging the feature representations of intra- and inter-category points to remain consistent, i.e., points within the same category should have more similar feature representations than those from different categories. We formulate this characteristic into a simple distance metric loss, which collaborates with the point-wise cross-entropy loss to optimize the deep neural networks. Furthermore, to fully exploit the limited annotations, distance metric learning is conducted across different training images instead of simply adopting an image-dependent manner. We conduct extensive experiments on two challenging scene parsing benchmarks, PASCAL-Context and ADE20K, to validate the effectiveness of our PDML, and competitive mIoU scores are achieved.
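The idea of pulling intra-category point features together while pushing inter-category features apart can be sketched as a simple pairwise contrastive loss. This is a minimal illustration, not the paper's exact formulation: the function name `pdml_loss`, the squared-Euclidean distance, and the `margin` hinge for inter-category pairs are all assumptions made for clarity.

```python
import numpy as np

def pdml_loss(features, labels, margin=1.0):
    """Illustrative point-based distance metric loss (assumed form).

    features: (N, D) array of feature vectors at the N labeled points,
              possibly gathered from several training images.
    labels:   length-N sequence of category labels for those points.
    margin:   hinge margin separating inter-category pairs (assumption).
    """
    n = len(labels)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((features[i] - features[j]) ** 2)
            if labels[i] == labels[j]:
                # Intra-category: pull the two representations together.
                total += d
            else:
                # Inter-category: push apart until the margin is met.
                total += max(0.0, margin - d)
            pairs += 1
    return total / max(pairs, 1)
```

Because the pairs are enumerated over all labeled points regardless of which image they come from, the same routine covers the cross-image setting described in the abstract; in training this term would be added to the point-wise cross-entropy loss.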

Published
2019-07-17
Section
AAAI Technical Track: Vision