Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
We propose techniques that obtain top-k lists of items out of larger itemsets, using human workers to perform comparisons among items. An example application is short-listing a large set of college applications using advanced students as workers. A method that obtains crowdsourced top-k lists has to address several challenges of crowdsourcing: the total number of tasks is constrained for monetary or practical reasons; tasks posted to workers have an inherent limit on their size; obtaining results from human workers has high latency; workers may disagree in their judgments of the same items or deliberately provide wrong results; and tasks of the same size can vary in difficulty. We describe novel, efficient techniques and explore their tolerance to adversarial behavior as well as the tradeoffs among different measures of performance (latency, expense, and quality of results). We empirically evaluate the proposed techniques using simulations as well as real crowds on Amazon Mechanical Turk. A randomized variant of the proposed algorithms achieves significant budget savings, especially for very large itemsets and large top-k lists, with negligible risk of lowering the quality of the output.
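To make the problem setting concrete, the following is a minimal Python sketch of one naive way to assemble a top-k list from size-limited ranking tasks: a single-elimination tournament over small groups, repeated k times. It is not the algorithm proposed in the paper; the rank_task callback and the noiseless simulated worker are illustrative assumptions only, and the sketch ignores the worker disagreement, adversarial answers, and latency batching that the paper addresses.

    import random
    from typing import Callable, List, Sequence

    def crowd_top_k(items: Sequence, k: int, task_size: int,
                    rank_task: Callable[[List], List]) -> List:
        """Return an approximate top-k list of `items`.

        `rank_task` stands in for a crowdsourcing task: given at most
        `task_size` items, it returns them ordered best-first (e.g. as a
        worker on a platform such as Mechanical Turk would). This is a
        plain single-elimination tournament repeated k times, shown only
        to illustrate the setting; it is NOT the paper's algorithm.
        """
        remaining = list(items)
        top = []
        for _ in range(min(k, len(remaining))):
            # Tournament round: rank small groups, keep each group's winner,
            # and repeat until a single overall winner remains.
            round_items = remaining
            while len(round_items) > 1:
                winners = []
                for i in range(0, len(round_items), task_size):
                    group = round_items[i:i + task_size]
                    winners.append(rank_task(group)[0])
                round_items = winners
            best = round_items[0]
            top.append(best)
            remaining.remove(best)
        return top

    if __name__ == "__main__":
        # Noiseless simulated worker: ranks numbers in descending order.
        simulated_worker = lambda group: sorted(group, reverse=True)
        items = random.sample(range(1000), 50)
        print(crowd_top_k(items, k=5, task_size=4, rank_task=simulated_worker))

Even this naive baseline makes the cost tradeoff visible: each extracted item requires a fresh tournament over the remaining items, which is where budget-aware and randomized strategies such as those studied in the paper can save tasks.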
DOI:
10.1609/hcomp.v4i1.13281
ISBN:
978-1-57735-774-2