Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
Users of peer production web sites differ greatly in their activity levels. A small minority are engaged contributors, while the vast majority are only casual surfers. The casual users devote little effort to evaluating the site's content, and many of them visit the site only once. This churn poses a challenge for sites attempting to gauge user interest in their content. The challenge is especially severe for sites focusing on content with subjective quality, including movies, music, restaurants, and items in other cultural markets. A key question is whether content evaluation should use the opinions of all users or only those of the minority who devote significant effort to reviewing content. Using Amazon Mechanical Turk, we experimentally address this question by comparing outcomes for these two approaches. We find that the larger numbers of less informed users more than offset their noisy signals on content quality, providing rapid evaluation. However, such users are systematically biased, and the speed of their assessments comes at the expense of limited collective accuracy.
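As a rough illustration of the tradeoff described in the abstract, the sketch below simulates rating aggregation under assumed parameters; the quality score, noise levels, rater counts, and bias term are hypothetical choices, not values from the paper. Averaging many noisy ratings drives individual variance down quickly, but a shared systematic bias among casual raters limits the accuracy of their collective estimate.

# A minimal sketch (not from the paper) of the noise-vs-bias tradeoff:
# averaging many noisy ratings reduces variance, but a shared bias in
# casual raters caps collective accuracy. All parameters are assumed.
import random

random.seed(0)

TRUE_QUALITY = 3.5  # hypothetical "ground truth" score for one item

def rate(true_q, noise_sd, bias):
    """One rater's score: truth plus personal noise plus systematic bias."""
    return true_q + random.gauss(0, noise_sd) + bias

def crowd_estimate(n_raters, noise_sd, bias):
    """Average of n independent ratings of the same item."""
    return sum(rate(TRUE_QUALITY, noise_sd, bias) for _ in range(n_raters)) / n_raters

# Few engaged raters: low individual noise, no systematic bias (assumed).
engaged = crowd_estimate(n_raters=10, noise_sd=0.3, bias=0.0)

# Many casual raters: high individual noise plus a shared bias (assumed).
casual = crowd_estimate(n_raters=1000, noise_sd=1.5, bias=0.4)

print(f"true quality      : {TRUE_QUALITY:.2f}")
print(f"10 engaged raters : {engaged:.2f}")   # close to truth, but slow to accumulate
print(f"1000 casual raters: {casual:.2f}")    # noise averages away, the bias remains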
DOI:
10.1609/hcomp.v4i1.13277
HCOMP, Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing. ISBN 978-1-57735-774-2