Published:
2014-11-05
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2
Issue:
Vol. 2 (2014): Second AAAI Conference on Human Computation and Crowdsourcing
Track:
Demonstrations
Abstract:
Crowd workers exhibit varying work patterns, expertise, and quality, leading to wide variability in the performance of crowdsourcing platforms. The onus of choosing a suitable platform for posting tasks rests largely with the requester, often resulting in poor guarantees and unmet requirements because platform performance shifts over time. To address this, we demonstrate CrowdUtility, a statistical-modeling-based tool that evaluates multiple crowdsourcing platforms and recommends the platform best suited to the requester's requirements. CrowdUtility uses an online Multi-Armed Bandit framework to schedule tasks while optimizing platform performance. We demonstrate an end-to-end system spanning requirements specification, platform recommendation, and real-time monitoring.
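For intuition, here is a minimal sketch of how an online Multi-Armed Bandit loop could route tasks across crowdsourcing platforms. The abstract does not specify the exact bandit algorithm, so this uses the standard UCB1 rule as a stand-in; the platform names and the reward function below are hypothetical.

```python
import math
import random

# Illustrative sketch only: treats each crowdsourcing platform as a bandit
# arm and picks platforms by the UCB1 rule (mean reward + exploration bonus).
# Platform names and the reward function are hypothetical placeholders.

PLATFORMS = ["platform_a", "platform_b", "platform_c"]

counts = {p: 0 for p in PLATFORMS}     # times each platform was chosen
rewards = {p: 0.0 for p in PLATFORMS}  # cumulative observed task quality

def choose_platform(t):
    """Pick a platform by UCB1: empirical mean plus exploration bonus."""
    for p in PLATFORMS:  # play each arm once before applying the bonus
        if counts[p] == 0:
            return p
    def ucb(p):
        mean = rewards[p] / counts[p]
        bonus = math.sqrt(2.0 * math.log(t) / counts[p])
        return mean + bonus
    return max(PLATFORMS, key=ucb)

def observe_reward(platform):
    """Stand-in for measuring result quality of a task posted to `platform`.
    In a real system this would come from worker output, not a random draw."""
    return random.random()

for t in range(1, 501):
    p = choose_platform(t)      # recommend a platform for the next task
    r = observe_reward(p)       # observe its performance
    counts[p] += 1
    rewards[p] += r
```

In this sketch, the exploration bonus shrinks as a platform accumulates observations, so the loop gradually concentrates tasks on the platform whose measured performance is best while still periodically re-checking the others, matching the abstract's description of scheduling tasks while optimizing platform performance.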
DOI:
10.1609/hcomp.v2i1.13138
ISBN 978-1-57735-682-0