Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
Work quality in crowdsourcing task sessions can change over time due to both internal factors, such as learning and boredom, and external factors, such as the provision of monetary interventions. Prior studies on crowd work quality have focused on characterizing the temporal behavior patterns that result from the internal factors. In this paper, we propose to explicitly take the impact of external factors into consideration when modeling crowd work quality. We present a series of seven models from three categories (supervised learning models, autoregressive models, and Markov models) and conduct an empirical comparison of how well these models can predict crowd work quality under monetary interventions on three datasets collected from Amazon Mechanical Turk. Our results show that all of these models outperform baseline models that do not consider the impact of monetary interventions. Our empirical comparison further identifies the random forests model as an excellent model to use in practice, as it consistently provides accurate predictions with high confidence across different datasets, and it also demonstrates robustness against limited training data and limited access to the ground truth.
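The sketch below illustrates the kind of prediction task the abstract describes: forecasting a worker's quality on the next task from recent quality history plus a monetary-intervention signal, using a random forest as the abstract recommends. This is not the authors' code; the lag-window feature layout, the bonus flag, and the synthetic data are all hypothetical placeholders chosen for illustration.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# predict next-task work quality from a short lag window of past
# quality scores plus a 0/1 monetary-intervention flag.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical features: quality on the last 3 tasks, plus whether
# a bonus was offered before the current task.
n = 500
lags = rng.uniform(0.4, 1.0, size=(n, 3))   # past quality scores in [0.4, 1.0]
bonus = rng.integers(0, 2, size=(n, 1))      # 1 if a monetary intervention occurred
X = np.hstack([lags, bonus])

# Hypothetical target: quality tracks the recent average, nudged
# upward when a bonus is present, with noise.
y = lags.mean(axis=1) + 0.1 * bonus.ravel() + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Compare predicted quality for the same worker with and without
# a monetary intervention before the next task.
print(model.predict([[0.7, 0.75, 0.8, 1],
                     [0.7, 0.75, 0.8, 0]]))
```

On real data the features would come from observed task sessions rather than a synthetic generator, but the structure, historical quality plus an explicit intervention indicator, mirrors the abstract's point that models accounting for external factors outperform baselines that ignore them.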
DOI:
10.1609/hcomp.v4i1.13282
ISBN 978-1-57735-774-2