Published: 2017-10-27
Proceedings: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 5
Issue: Vol. 5 (2017): Fifth AAAI Conference on Human Computation and Crowdsourcing
Track: Full Papers
Abstract:
Task instruction quality is widely presumed to affect outcomes such as accuracy, throughput, trust, and worker satisfaction. Best-practice guides written by experienced requesters offer advice on how to craft task interfaces, yet there is little evidence of how specific task design attributes affect actual outcomes. This paper presents a set of studies that expose the relationship between three sets of measures: (a) workers’ perceptions of task quality, (b) adherence to popular best practices, and (c) actual outcomes when tasks are posted (including accuracy, throughput, trust, and worker satisfaction). These were investigated using collected task interfaces, along with a model task that we systematically mutated to test the effects of specific task design guidelines.
DOI: 10.1609/hcomp.v5i1.13317
ISBN: 978-1-57735-793-3