Published:
2013-11-10
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
Volume:
1
Issue:
Vol. 1 (2013): First AAAI Conference on Human Computation and Crowdsourcing
Track:
Works in Progress
Abstract:
We have developed a method for accurately inferring true labels from labels provided by crowdsourcing workers, aided by the workers' self-reported confidence judgments about their labels. Although confidence judgments can provide useful information for estimating the quality of the provided labels, some workers are overconfident about the quality of their labels while others are underconfident. To address this problem, we extended the Dawid-Skene model into a probabilistic model that accounts for differences among workers in the accuracy of their confidence judgments. Experiments on real crowdsourced data showed that incorporating workers' confidence judgments can improve the accuracy of the inferred labels.
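For readers unfamiliar with the baseline model being extended, the sketch below shows a minimal Dawid-Skene EM aggregator in plain NumPy: it alternates between estimating a class prior plus per-worker confusion matrices and re-estimating the posterior over each item's true label. This is only an illustration of the standard model under assumed conventions (a dense item-by-worker label matrix with -1 for missing labels); it does not implement the paper's confidence-judgment extension, and all names (dawid_skene, labels, n_classes) are illustrative rather than taken from the paper.

```python
# A minimal sketch of standard Dawid-Skene aggregation (not the paper's
# confidence-aware extension). All names here are illustrative assumptions.
import numpy as np


def dawid_skene(labels, n_classes, n_iter=50, smoothing=0.01):
    """labels: (n_items, n_workers) int array; -1 marks a missing label.
    Returns the posterior over true labels and per-worker confusion matrices."""
    n_items, n_workers = labels.shape

    # Initialize the posterior over true labels with smoothed majority voting.
    post = np.full((n_items, n_classes), smoothing)
    for k in range(n_classes):
        post[:, k] += (labels == k).sum(axis=1)
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and per-worker confusion matrices,
        # pi[j, t, k] = P(worker j reports k | true label is t).
        prior = post.mean(axis=0)
        pi = np.full((n_workers, n_classes, n_classes), smoothing)
        for j in range(n_workers):
            for k in range(n_classes):
                pi[j, :, k] += post[labels[:, j] == k].sum(axis=0)
        pi /= pi.sum(axis=2, keepdims=True)

        # E-step: recompute each item's posterior from the prior and the
        # likelihood of the labels observed for that item.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for j in range(n_workers):
            idx = labels[:, j] >= 0
            log_post[idx] += np.log(pi[j][:, labels[idx, j]]).T
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

    return post, pi


if __name__ == "__main__":
    # Toy example: 4 items, 3 workers, binary labels; -1 means "not labeled".
    votes = np.array([[0, 0, 1], [1, 1, 1], [0, -1, 0], [1, 0, -1]])
    posterior, confusion = dawid_skene(votes, n_classes=2)
    print(posterior.argmax(axis=1))  # inferred labels for the four items
```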
DOI:
10.1609/hcomp.v1i1.13113
ISBN 978-1-57735-607-3