Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4
Volume/Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
Crowdsourcing is increasingly being used to solve complex tasks that require contributions from groups of individuals. In this paper, we consider the problem of distinguishing workers from idlers (who do not contribute positively) in group-based tasks. We treat the group as our smallest observable unit that can be evaluated and assume no knowledge of any individual participant's contribution. We propose group-testing-based methods for estimating the quality of an individual from the performance of the teams they have been part of. We further extend these algorithms to identify subsets of workers and provide a theoretical analysis of the size of these subsets. We account for several real-world constraints in our model and present empirical support for our theoretical guarantees through an array of simulation experiments.
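To illustrate the group-testing idea behind the abstract, the sketch below uses a simplified model that is an assumption for illustration only (not the paper's exact model): a group's task fails if and only if it contains at least one idler. A COMP-style decoder then clears every worker who appeared in at least one successful group; the remaining workers are the suspected idlers. The function name `comp_decode` and the simulation parameters are hypothetical.

```python
import random

def comp_decode(groups, outcomes, n_workers):
    """COMP-style decoder for group testing: any worker who appears in a
    group whose task succeeded is cleared; everyone else remains a
    suspected idler."""
    suspected = set(range(n_workers))
    for members, failed in zip(groups, outcomes):
        if not failed:  # a successful group contains no idlers
            suspected -= set(members)
    return suspected

# Toy simulation under the assumed model:
# a group fails iff it contains at least one idler.
random.seed(0)
n_workers, idlers = 12, {3, 7}
groups = [random.sample(range(n_workers), 4) for _ in range(30)]
outcomes = [any(w in idlers for w in g) for g in groups]
print(sorted(comp_decode(groups, outcomes, n_workers)))
```

Under this model the decoder never clears a true idler (idlers only ever appear in failed groups), so the output is always a superset of the idler set; with enough random groups it typically narrows down to exactly that set.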
DOI:
10.1609/hcomp.v4i1.13272
ISBN 978-1-57735-774-2