Most research on quality control in crowdsourced workflows has focused on microtasks, where quality can be improved by assigning each task to multiple workers and interpreting the output as a function of worker agreement. Not all work fits microtask frameworks, however, especially work that requires significant training or time per task. For such context-heavy crowd work systems with limited budget for task redundancy, we propose three novel techniques for reducing task error: (1) a self-policing crowd hierarchy in which trusted workers review, correct, and improve entry-level workers' output; (2) predictive modeling of task error that improves data quality through targeted redundancy; and (3) holistic modeling of worker performance that supports crowd-management strategies designed to improve average worker quality and allocate training to the workers who need the most assistance.
Published Date: 2013-11-10
Registration: ISBN 978-1-57735-607-3