AAAI Publications, Fourth AAAI Conference on Human Computation and Crowdsourcing

Quality Estimation of Workers in Collaborative Crowdsourcing Using Group Testing
Prakhar Ojha, Partha Talukdar

Last modified: 2016-09-21

Abstract


Crowdsourcing is increasingly used to solve complex tasks that require contributions from groups of individuals. In this paper, we consider the problem of distinguishing workers from idlers (those who do not contribute positively) in group-based tasks. We treat a group as the smallest observable unit that can be evaluated and assume no knowledge of any individual participant's contribution. We propose group-testing-based methods for estimating the quality of an individual from the performance of the teams they have been part of. We further extend these algorithms to identify subsets of workers and give a theoretical analysis of the size of these subsets. Our model accounts for several real-world constraints, and we present empirical support for our theoretical guarantees through an array of simulation experiments.
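The idea of scoring individuals from group-level outcomes alone can be illustrated with a minimal simulation. The sketch below assumes a classic "OR" group-testing model (a group succeeds if it contains at least one true worker); all names, group sizes, and parameters are illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)

N_PEOPLE = 30
N_WORKERS = 10      # assume the first 10 people are true workers, the rest idlers
N_TESTS = 400
GROUP_SIZE = 5

def group_outcome(group):
    # Assumed "OR" model: a group succeeds iff it contains at least one worker.
    # Only this group-level outcome is observable, never individual contributions.
    return any(p < N_WORKERS for p in group)

# Run random pooled tests and tally each person's group outcomes.
successes = [0] * N_PEOPLE
appearances = [0] * N_PEOPLE
for _ in range(N_TESTS):
    group = random.sample(range(N_PEOPLE), GROUP_SIZE)
    out = group_outcome(group)
    for p in group:
        appearances[p] += 1
        successes[p] += out

# Score = fraction of a person's groups that succeeded. Workers score 1.0 in
# this model, since every group containing a worker succeeds; idlers are
# dragged down by the all-idler groups they happen to join.
scores = [successes[p] / max(appearances[p], 1) for p in range(N_PEOPLE)]
ranked = sorted(range(N_PEOPLE), key=lambda p: -scores[p])
print(sorted(ranked[:N_WORKERS]))
```

Ranking by this score recovers the worker set under the assumed model; the paper's contribution concerns doing this with theoretical guarantees under noisier, real-world constraints.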

Keywords


Group testing; groupsourcing
