Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
Crowdsourced demarcations of object boundaries in images (segmentations) are important for many vision-based applications. A commonly reported challenge is that a large percentage of crowd results are discarded due to concerns about quality. We conducted three studies to examine (1) how the quality of crowdsourced segmentations differs between familiar everyday images and unfamiliar biomedical images, (2) how making familiar images less recognizable (rotating them upside down) influences crowd work with respect to result quality, segmentation time, and segmentation detail, and (3) how crowd workers' judgments of the ambiguity of the segmentation task, collected by voting, differ between familiar everyday images and unfamiliar biomedical images. We analyzed a total of 2,525 segmentations collected from 121 crowd workers and 1,850 votes from 55 crowd workers. Our results illustrate the potential benefit of explicitly accounting for human familiarity with the data when designing computer interfaces for human interaction.
DOI:
10.1609/hcomp.v4i1.13294
ISBN 978-1-57735-774-2