Published:
2015-11-12
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3
Issue:
Vol. 3 (2015): Third AAAI Conference on Human Computation and Crowdsourcing
Track:
Works in Progress
Abstract:
Quality control (QC) is an integral part of many crowdsourcing systems. However, popular QC methods, such as aggregating multiple annotations, filtering workers, or verifying the quality of crowd work, introduce additional costs and delays. We propose a complementary paradigm to these QC methods based on predicting the quality of submitted crowd work. In particular, we propose to predict the quality of a given crowd drawing directly from a crowd worker's drawing time, number of user clicks, and average time per user click. We focus on the task of drawing the boundary of a single object in an image. To train and test our prediction models, we collected a total of 2,025 crowd-drawn segmentations for 405 familiar everyday images and unfamiliar biomedical images from 90 unique crowd workers. We first evaluated five prediction models learned using different combinations of the three worker behavior cues for all images. Experiments revealed that the average time per user click was the most effective cue for predicting segmentation quality. We next inspected the predictive power of models learned using crowd annotations collected for familiar and unfamiliar data independently. Prediction models were significantly more effective at estimating segmentation quality from crowd worker behavior for familiar image content than for unfamiliar image content.
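
The abstract does not specify which learners the five prediction models used. The following is a minimal Python sketch of the general idea, assuming scikit-learn and synthetic data: a regressor trained on the three behavior cues (drawing time, number of clicks, average time per click) to predict a segmentation quality score. The linear model, feature construction, and data are illustrative assumptions, not the paper's actual method or dataset.

# Hypothetical sketch: predicting segmentation quality from worker
# behavior cues, as described in the abstract. The linear model and
# the synthetic data below are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # stand-in for a set of crowd-drawn segmentations

# Behavior cues per submission: drawing time (s), number of clicks,
# and average time per click (the cue the paper found most effective).
drawing_time = rng.uniform(10, 300, n)
num_clicks = rng.integers(5, 120, n).astype(float)
time_per_click = drawing_time / num_clicks

X = np.column_stack([drawing_time, num_clicks, time_per_click])
# Synthetic quality scores (e.g., overlap with a gold segmentation),
# loosely correlated with time per click for demonstration purposes.
y = np.clip(0.4 + 0.05 * time_per_click + rng.normal(0, 0.1, n), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out submissions:", model.score(X_test, y_test))

Training separate models of this form on the familiar and unfamiliar image subsets would mirror the paper's second experiment, where predictive power differed by content familiarity.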
DOI:
10.1609/hcomp.v3i1.13260
ISBN 978-1-57735-740-7