AAAI Publications, Fifth AAAI Conference on Human Computation and Crowdsourcing

Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk
Meng-Han Wu, Alexander James Quinn

Last modified: 2017-09-21

Abstract


Task instruction quality is widely presumed to affect outcomes such as accuracy, throughput, trust, and worker satisfaction. Best-practice guides written by experienced requesters offer advice on how to craft task interfaces. However, there is little evidence of how specific task design attributes affect actual outcomes. This paper presents a set of studies that expose the relationship between three sets of measures: (a) workers’ perceptions of task quality, (b) adherence to popular best practices, and (c) actual outcomes when tasks are posted (including accuracy, throughput, trust, and worker satisfaction). These were investigated using collected task interfaces, along with a model task that we systematically mutated to test the effects of specific task design guidelines.

Keywords


crowdsourcing; human computation; human-computer interaction
