Published:
2014-11-05
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2
Issue:
Vol. 2 (2014): Second AAAI Conference on Human Computation and Crowdsourcing
Track:
Works in Progress
Abstract:
We consider the crowdsourcing task of learning the answer to simple multiple-choice microtasks. To obtain statistically significant results, one often needs to ask multiple workers to answer the same microtask. A stopping rule is an algorithm that, given the set of worker answers collected so far for a microtask, decides whether the system should stop and output an answer or iterate and ask one more worker. A quality score for a worker is a score that reflects that worker’s historic performance. In this paper we investigate how to devise better stopping rules given such quality scores. We conduct a data analysis on a large-scale industrial crowdsourcing platform and use the observations from this analysis to design new stopping rules that use the workers’ quality scores in a non-trivial manner. We then run a simulation based on a real-world workload, showing that our algorithm outperforms more naive approaches.
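The abstract does not spell out the rule itself, so the following is only a minimal illustrative sketch of one way quality scores can feed a stopping rule, assuming each score is interpreted as the worker’s probability of answering correctly; the function name `should_stop`, the log-odds weighting, and the `threshold` and `num_options` parameters are all hypothetical and not the paper’s algorithm. The threshold trades cost against accuracy: a higher value demands a larger weighted lead before stopping.

```python
import math
from collections import defaultdict

def should_stop(answers, num_options=4, threshold=5.0):
    """Quality-weighted stopping rule (illustrative sketch, not the
    paper's algorithm). `answers` is a list of (option, quality) pairs,
    where quality is the worker's historic probability of being correct.
    Each answer contributes a log-likelihood-ratio vote; stop once the
    leading option's weighted lead over the runner-up exceeds `threshold`."""
    scores = defaultdict(float)
    for option, quality in answers:
        # Clamp quality away from 0 and 1 to keep the log-odds finite.
        q = min(max(quality, 1e-6), 1 - 1e-6)
        # Log-odds of a correct answer vs. a uniformly random wrong one.
        scores[option] += math.log(q * (num_options - 1) / (1 - q))
    ranked = sorted(scores.values(), reverse=True)
    lead = ranked[0] - (ranked[1] if len(ranked) > 1 else 0.0)
    if lead >= threshold:
        best = max(scores, key=scores.get)
        return True, best   # stop and output the leading answer
    return False, None      # iterate: ask one more worker

# Example: three high-quality workers agree, one low-quality worker disagrees.
votes = [("A", 0.9), ("A", 0.85), ("B", 0.55), ("A", 0.9)]
print(should_stop(votes))  # (True, 'A')
```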
DOI:
10.1609/hcomp.v2i1.13201
ISBN:
978-1-57735-682-0