AAAI Publications, First AAAI Conference on Human Computation and Crowdsourcing

Reducing Error in Context-Sensitive Crowdsourced Tasks
Daniel Haas, Matthew Greenstein, Kainar Kamalov, Adam Marcus, Marek Olszewski, Marc Piette

Last modified: 2013-11-03


Most research on quality control in crowdsourced workflows has focused on microtasks, where quality can be improved by assigning each task to multiple workers and interpreting the output as a function of worker agreement. Not all work fits the microtask framework, however, especially work that requires significant training or time per task. For such context-heavy crowd work systems with limited budget for task redundancy, we propose three novel techniques for reducing task error: (1) a self-policing crowd hierarchy in which trusted workers review, correct, and improve entry-level workers' output; (2) predictive modeling of task error that improves data quality through targeted redundancy; and (3) holistic modeling of worker performance that supports crowd management strategies designed to improve average worker quality and to allocate training to the workers who need it most.
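As a minimal sketch of the second technique, targeted redundancy, suppose a predictive model already yields a per-task error probability; the limited review budget could then be spent on the riskiest tasks first. The function name, the threshold, and the one-extra-review policy below are illustrative assumptions, not details from the paper:

```python
def allocate_redundancy(task_error_probs, budget, threshold=0.2):
    """Spend a limited review budget on the tasks most likely to be wrong.

    task_error_probs: {task_id: predicted probability that the first-pass
                       output is erroneous} (hypothetical model output)
    budget: total number of extra review assignments available
    threshold: ignore tasks whose predicted error risk is below this value
    Returns the set of task ids that receive one extra review.
    """
    # Rank tasks from highest to lowest predicted error probability,
    # keep only those above the risk threshold, then take as many as
    # the budget allows.
    risky = [t for t, p in sorted(task_error_probs.items(),
                                  key=lambda kv: kv[1], reverse=True)
             if p >= threshold]
    return set(risky[:budget])
```

With a budget of two extra reviews, `allocate_redundancy({"a": 0.9, "b": 0.1, "c": 0.5, "d": 0.3}, budget=2)` selects tasks `a` and `c`, the two highest-risk tasks, while low-risk task `b` receives no redundancy at all.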


Keywords: quality control; machine learning; beyond microtasks
