Published:
2013-11-10
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Volume 1
Issue:
Vol. 1 (2013): First AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
We develop a mechanism for setting discriminated reward prices in order to group crowd workers according to their abilities. In general, a worker has a certain level of confidence in the correctness of her answers, and asking her to report it is useful for estimating the probability of correctness. However, two main obstacles must be overcome before confidence reports can be used to infer correct answers. First, a worker is not always well calibrated: because she is sometimes over- or underconfident, her confidence does not always coincide with the probability of correctness. Second, she does not always report her confidence truthfully. We therefore design an indirect mechanism that enables a worker to declare her confidence by choosing a desirable reward plan from a set of plans that correspond to different confidence intervals. Our mechanism ensures that choosing the plan whose interval includes her true confidence maximizes the worker's expected utility. We also propose a method for composing a set of plans that achieves requester-specified accuracy in estimating the correct answer using a small number of workers. We present experimental results obtained using Amazon Mechanical Turk.
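The following is a minimal, illustrative sketch of the kind of incentive structure the abstract describes, not the paper's actual plan construction. Each hypothetical reward plan is a pair (reward if the answer is correct, reward if it is incorrect), so a worker's expected utility is linear in her confidence p; with suitably chosen pairs, each plan is optimal on an interval of confidence values, and a rational worker maximizes expected utility by picking the plan whose interval contains her true confidence. The specific reward values and interval boundaries below are assumptions made for illustration.

```python
# Hypothetical menu of reward plans: (reward_if_correct, reward_if_incorrect).
# Each plan's expected utility is linear in the worker's confidence p,
# so each plan is optimal on an interval of p values.
PLANS = {
    "low  (p <= 0.5)":        (5.0,  5.0),   # flat payment, safe for unsure workers
    "mid  (0.5 <= p <= 0.8)": (8.0,  2.0),
    "high (p >= 0.8)":        (10.0, -6.0),  # large reward, penalty if wrong
}

def expected_utility(plan, p):
    """Expected reward for a worker whose answer is correct with probability p."""
    reward_correct, reward_incorrect = plan
    return p * reward_correct + (1.0 - p) * reward_incorrect

def best_plan(p):
    """Plan a rational worker with confidence p would choose."""
    return max(PLANS, key=lambda name: expected_utility(PLANS[name], p))

if __name__ == "__main__":
    for p in (0.3, 0.55, 0.7, 0.9):
        print(f"confidence {p:.2f} -> {best_plan(p)}")
```

Running this prints "low" for p = 0.3, "mid" for p = 0.55 and 0.7, and "high" for p = 0.9, illustrating how a menu of plans can let workers reveal which confidence interval their true confidence falls into simply by self-selecting a plan.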
DOI:
10.1609/hcomp.v1i1.13083
HCOMP
ISBN 978-1-57735-607-3