Detecting Semantic Anomalies

Authors

  • Faruk Ahmed, Université de Montréal
  • Aaron Courville, Université de Montréal

DOI:

https://doi.org/10.1609/aaai.v34i04.5712

Abstract

We critically appraise the recent interest in out-of-distribution (OOD) detection and question the practical relevance of existing benchmarks. While the currently prevalent trend is to consider different datasets as OOD, we argue that out-distributions of practical interest are ones where the distinction is semantic in nature for a specified context, and that evaluative tasks should reflect this more closely. Assuming a context of object recognition, we recommend a set of benchmarks, motivated by practical applications. We make progress on these benchmarks by exploring a multi-task learning based approach, showing that auxiliary objectives for improved semantic awareness result in improved semantic anomaly detection, with accompanying generalization benefits.
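
To make the abstract's approach concrete, below is a minimal sketch (not the authors' released code) of multi-task training with an auxiliary self-supervised objective alongside a standard object-recognition head. The choice of rotation prediction as the auxiliary task, the tiny network, the loss weight, and the use of one minus the maximum softmax probability as the anomaly score are all illustrative assumptions, not necessarily the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Shared feature extractor with a main classification head and an auxiliary head."""
    def __init__(self, num_classes=10, num_rotations=4):
        super().__init__()
        # Deliberately small shared backbone, for the sake of the sketch.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)        # main object-recognition head
        self.rotation_head = nn.Linear(64, num_rotations)   # auxiliary self-supervised head

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h), self.rotation_head(h)

def multitask_loss(model, images, labels, aux_weight=0.5):
    """Main cross-entropy plus an auxiliary rotation-prediction loss on rotated copies."""
    logits, _ = model(images)
    main_loss = F.cross_entropy(logits, labels)

    # Build rotated copies (0, 90, 180, 270 degrees) and their rotation labels.
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    rot_labels = torch.arange(4).repeat_interleave(images.size(0))
    _, rot_logits = model(rotated)
    aux_loss = F.cross_entropy(rot_logits, rot_labels)

    return main_loss + aux_weight * aux_loss

def anomaly_score(model, images):
    """Higher score = more anomalous; here, one minus the maximum softmax probability."""
    with torch.no_grad():
        logits, _ = model(images)
        return 1.0 - F.softmax(logits, dim=1).max(dim=1).values

# Example: one training step on random data, then scoring a held-out batch.
model = MultiTaskNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
multitask_loss(model, x, y).backward()
opt.step()
print(anomaly_score(model, torch.randn(4, 3, 32, 32)))
```

The intent of the sketch is only to show the structure suggested by the abstract: the shared backbone is trained on both objectives, and anomaly detection at test time relies on the main head's predictive confidence.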

Published

2020-04-03

How to Cite

Ahmed, F., & Courville, A. (2020). Detecting Semantic Anomalies. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3154-3162. https://doi.org/10.1609/aaai.v34i04.5712

Section

AAAI Technical Track: Machine Learning