Published: 2019-10-21
Proceedings: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7
Issue: Vol. 7 (2019): Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing
Track: Technical Papers
Abstract:
While there have been many proposals for making AI algorithms explainable, few have attempted to evaluate the impact of AI-generated explanations on human performance in human-AI collaborative tasks. To bridge this gap, we propose a Twenty Questions-style collaborative image-retrieval game, Explanation-assisted Guess Which (ExAG), as a method for evaluating the efficacy of explanations (visual evidence or textual justification) in the context of Visual Question Answering (VQA). In ExAG, a human user must guess a secret image picked by the VQA agent by asking it natural-language questions. We show that, overall, users guess the secret image correctly more often when the AI explains its answers. Notably, compared with no-explanation games, even a few correct explanations can readily improve human performance when the VQA answers are mostly incorrect. Furthermore, while explanations rated as “helpful” significantly improve human performance, “incorrect” and “unhelpful” explanations can degrade it relative to no-explanation games. Our experiments therefore demonstrate that ExAG is an effective means of evaluating the efficacy of AI-generated explanations in a human-AI collaborative task.
DOI: 10.1609/hcomp.v7i1.5275
ISBN: 978-1-57735-820-6