Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

  • Jayaraman J. Thiagarajan Lawrence Livermore National Labs
  • Bindya Venkatesh Arizona State University
  • Prasanna Sattigeri IBM Research AI
  • Peer-Timo Bremer Lawrence Livermore National Labs

Abstract

With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify their inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well-calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results in at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an uncertainty matching strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, both in terms of model fidelity and calibration error.
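The two-model idea in the abstract — an auxiliary interval predictor trained alongside a mean predictor that is encouraged to agree with the estimated uncertainties — can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses simple linear models, alternating gradient steps in place of true bi-level optimization, a pinball (quantile) loss for the intervals, and a hypothetical matching penalty that pulls the mean model's residual magnitudes toward the interval half-widths. All names and hyperparameters (`lam`, `lr`, the 5%/95% quantile levels) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic regression data: y = 2x + noise growing with |x|.
n = 2000
x = rng.uniform(-2, 2, n)
y = 2.0 * x + rng.normal(0.0, 0.2 + 0.3 * np.abs(x))
X = np.stack([np.ones_like(x), x], axis=1)  # bias + linear feature

def pinball_grad(q, y, tau):
    # Subgradient of the pinball (quantile) loss w.r.t. the prediction q.
    return np.where(y > q, -tau, 1.0 - tau)

# Auxiliary interval predictor: two linear quantile regressors (5% and 95%).
w_lo = np.zeros(2)
w_hi = np.zeros(2)
# Prediction model: a linear mean regressor.
w_mu = np.zeros(2)

lr, lam = 0.05, 0.5  # lam weights the (hypothetical) uncertainty-matching term
for _ in range(300):
    # Step 1: update the interval predictor with pinball-loss subgradients.
    q_lo, q_hi = X @ w_lo, X @ w_hi
    w_lo -= lr * (X * pinball_grad(q_lo, y, 0.05)[:, None]).mean(axis=0)
    w_hi -= lr * (X * pinball_grad(q_hi, y, 0.95)[:, None]).mean(axis=0)

    # Step 2: update the mean model with MSE plus a matching penalty that
    # nudges |residual| toward the current interval half-width, so the
    # predictor's errors agree with the estimated uncertainty.
    mu = X @ w_mu
    half = (X @ w_hi - X @ w_lo) / 2.0
    r = mu - y
    g = 2.0 * r + lam * 2.0 * (np.abs(r) - half) * np.sign(r)
    w_mu -= lr * (X * g[:, None]).mean(axis=0)

coverage = np.mean((y >= X @ w_lo) & (y <= X @ w_hi))
mse = np.mean((X @ w_mu - y) ** 2)
print(f"interval coverage: {coverage:.2f}, mean-model MSE: {mse:.3f}")
```

In the paper the coupling runs through a bi-level objective rather than this simple alternating scheme, but the sketch captures the structure: the interval model supplies uncertainty estimates, and the prediction model is regularized to match them.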

Published
2020-04-03
Section
AAAI Technical Track: Machine Learning