AAAI Publications, Fourth AAAI Conference on Human Computation and Crowdsourcing

Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System
Danna Gurari, Mehrnoosh Sameki, Margrit Betke

Last modified: 2016-09-21

Abstract


Crowdsourced demarcations of object boundaries in images (segmentations) are important for many vision-based applications. A commonly reported challenge is that a large percentage of crowd results are discarded due to concerns about quality. We conducted three studies to examine (1) how the quality of crowdsourced segmentations differs for familiar everyday images versus unfamiliar biomedical images, (2) how making familiar images less recognizable (rotating images upside down) influences crowd work with respect to the quality of results, segmentation time, and segmentation detail, and (3) how crowd workers' judgments of the ambiguity of the segmentation task, collected by voting, differ for familiar everyday images and unfamiliar biomedical images. We analyzed a total of 2,525 segmentations collected from 121 crowd workers and 1,850 votes from 55 crowd workers. Our results illustrate the potential benefit of explicitly accounting for human familiarity with the data when designing computer interfaces for human interaction.

Keywords


Image Annotation; Crowdsourcing; Data Familiarity
