Published:
2019-10-21
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7
Issue:
Vol. 7 (2019): Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing
Track:
Technical Papers
Abstract:
When predictive models are deployed in the real world, the confidence of a given prediction is often used as a signal of how much it should be trusted. It is therefore critical to identify instances for which the model is highly confident yet incorrect, i.e., the unknown unknowns. We describe a hybrid approach to identifying unknown unknowns that combines previous crowdsourcing and algorithmic strategies and addresses some of their weaknesses. In particular, we propose learning a set of interpretable decision rules that approximate how the model makes high-confidence predictions. We devise a crowdsourcing task in which workers are presented with a rule and challenged to generate an instance that “contradicts” it. A bandit algorithm is used to select the most promising rules to present to workers. We evaluate our method in a user study on Amazon Mechanical Turk. Experimental results on three datasets indicate that our approach discovers unknown unknowns more efficiently than the state of the art.
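The abstract describes using a bandit algorithm to choose which decision rule to show workers next. The paper does not specify the algorithm here, so the following is a minimal sketch under assumptions: we use UCB1, treating each rule as an arm and giving reward 1 when a (simulated) worker manages to contradict the chosen rule, i.e., surfaces an unknown unknown. The per-rule success probabilities and the worker simulation are illustrative stand-ins, not the paper's actual setup.

```python
import math
import random

def ucb1_select_rules(discovery_probs, n_rounds, seed=0):
    """Run UCB1 over rules; return (pulls, successes) per rule.

    discovery_probs[i] is the (assumed) probability that a worker
    shown rule i produces a contradicting instance.
    """
    rng = random.Random(seed)
    n = len(discovery_probs)
    pulls = [0] * n      # times each rule was presented
    successes = [0] * n  # unknown unknowns found per rule
    for t in range(1, n_rounds + 1):
        if t <= n:
            arm = t - 1  # initialise: present each rule once
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(n), key=lambda i: successes[i] / pulls[i]
                      + math.sqrt(2 * math.log(t) / pulls[i]))
        # Simulated worker: contradicts the rule with probability p
        reward = 1 if rng.random() < discovery_probs[arm] else 0
        pulls[arm] += 1
        successes[arm] += reward
    return pulls, successes

pulls, successes = ucb1_select_rules([0.1, 0.4, 0.7], n_rounds=300)
```

Over time the bandit concentrates worker effort on the rules whose contradictions are found most often, which is the efficiency gain the abstract claims over presenting rules uniformly.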
DOI:
10.1609/hcomp.v7i1.5274
ISBN 978-1-57735-820-6