AAAI Publications, Third AAAI Conference on Human Computation and Crowdsourcing

Identifying and Accounting for Task-Dependent Bias in Crowdsourcing
Ece Kamar, Ashish Kapoor, Eric Horvitz

Last modified: 2015-09-23


Models for aggregating contributions from crowd workers are challenged by task-specific biases and errors. Task-dependent errors in assessment can shift the majority opinion of even a large number of workers to an incorrect answer. We introduce and evaluate probabilistic models that detect and correct task-dependent bias automatically. First, we show how to build and use probabilistic graphical models that jointly model task features, workers' biases, worker contributions, and the ground-truth answers of tasks, so that task-dependent bias can be corrected. Second, we show how the approach can perform a type of transfer learning among workers to address the issue of annotation sparsity. We evaluate models of varying complexity on a large data set collected from a citizen science project and show that they are effective at correcting task-dependent worker bias. Finally, we investigate the use of active learning to guide the acquisition of expert assessments that enable the automatic detection and correction of worker bias.
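The paper's approach uses Bayesian graphical models, which are beyond the scope of an abstract page. As a purely illustrative toy (not the authors' method), the sketch below shows the core phenomenon and the simplest possible correction: a binary task feature that systematically biases workers can flip the majority vote on affected tasks, and a small set of expert (gold) labels suffices to estimate the per-feature error rate and undo the bias. All names, rates, and sizes here are invented for the demonstration.

```python
import random

random.seed(0)

N_TASKS, N_WORKERS = 200, 7
# Each task has a binary feature; feature True triggers a systematic bias
# that pushes most workers toward the wrong answer (toy assumption).
features = [random.random() < 0.3 for _ in range(N_TASKS)]
truth = [random.random() < 0.5 for _ in range(N_TASKS)]

def worker_vote(true_label, biased_task):
    # On biased tasks workers err 70% of the time; otherwise only 10%.
    err = 0.7 if biased_task else 0.1
    return true_label if random.random() > err else (not true_label)

votes = [[worker_vote(truth[t], features[t]) for _ in range(N_WORKERS)]
         for t in range(N_TASKS)]

def majority(vs):
    return sum(vs) > len(vs) / 2

# Naive majority vote: biased tasks drag overall accuracy down.
naive_acc = sum(majority(votes[t]) == truth[t]
                for t in range(N_TASKS)) / N_TASKS

# Feature-aware correction: estimate the per-feature error rate from a
# small gold-labeled subset (standing in for expert assessments), then
# flip the majority wherever workers err more often than not.
gold = range(30)
err_by_feat = {}
for f in (False, True):
    pairs = [(v, truth[t]) for t in gold if features[t] == f
             for v in votes[t]]
    err_by_feat[f] = sum(v != y for v, y in pairs) / len(pairs)

def corrected(t):
    m = majority(votes[t])
    return (not m) if err_by_feat[features[t]] > 0.5 else m

corr_acc = sum(corrected(t) == truth[t]
               for t in range(N_TASKS)) / N_TASKS
```

The full models in the paper replace this hard flip with joint probabilistic inference over task features, worker biases, and ground truth, and use active learning to choose which tasks receive expert labels.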


crowdsourcing; human computation; Bayesian graphical models; bias; aggregation models
