Published: 2014-11-05
Proceedings: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2
Issue: Vol. 2 (2014): Second AAAI Conference on Human Computation and Crowdsourcing
Track: Research Papers
Abstract:
One of the most popular uses of crowdsourcing is to provide training data for supervised machine learning algorithms. Since human annotators often make errors, requesters commonly ask multiple workers to label each example. But is this strategy always the most cost-effective use of crowdsourced workers? We argue "No": often classifiers can achieve higher accuracies when trained with noisy "unilabeled" data. However, in some cases relabeling is extremely important. We discuss three factors that may make relabeling an effective strategy: classifier expressiveness, worker accuracy, and budget.
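
The tradeoff described in the abstract can be made concrete with a small simulation: under a fixed labeling budget, "unilabeling" buys more training examples with noisier labels, while relabeling buys fewer but cleaner (majority-voted) examples. The sketch below is illustrative only and is not the authors' experimental setup; the worker accuracy, budget, synthetic data generator, and choice of logistic regression are all assumed values picked for the example.

# Minimal sketch (assumed setup, not the paper's experiments): fixed budget of
# worker labels, split between getting k noisy labels per example (majority vote)
# versus one noisy label per example. k=1 is unilabeling; k>1 is relabeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)              # hidden "true" concept (linear)

def make_data(n):
    """Synthetic examples labeled by the hidden linear concept."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(int)
    return X, y

def noisy_votes(y, worker_acc, k):
    """k independent worker labels per example, each correct w.p. worker_acc."""
    flips = rng.random((len(y), k)) > worker_acc
    return np.where(flips, 1 - y[:, None], y[:, None])

def run(budget, worker_acc, k):
    """Spend the whole budget at k labels per example, train, and test."""
    n = budget // k                      # fewer examples when each gets k labels
    X, y = make_data(n)
    votes = noisy_votes(y, worker_acc, k)
    y_vote = (votes.mean(axis=1) > 0.5).astype(int)   # majority vote (odd k)
    clf = LogisticRegression().fit(X, y_vote)
    X_test, y_test = make_data(5000)
    return clf.score(X_test, y_test)

budget, worker_acc = 600, 0.7            # assumed values for illustration
for k in (1, 3, 5):
    print(f"k={k}: held-out accuracy ~ {run(budget, worker_acc, k):.3f}")

Varying worker_acc, the budget, or the classifier in this sketch is one way to see the three factors the abstract names (classifier expressiveness, worker accuracy, and budget) shift the balance between unilabeling and relabeling.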
DOI: 10.1609/hcomp.v2i1.13167
ISBN: 978-1-57735-682-0