AAAI Publications, Second AAAI Conference on Human Computation and Crowdsourcing

Scaling-Up the Crowd: Micro-Task Pricing Schemes for Worker Retention and Latency Improvement
Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, Philippe Cudré-Mauroux

Last modified: 2014-09-05


Retaining workers on micro-task crowdsourcing platforms is essential in order to guarantee the timely completion of batches of Human Intelligence Tasks (HITs). Worker retention is also a necessary condition for the introduction of SLAs on crowdsourcing platforms. In this paper, we introduce novel pricing schemes aimed at improving the retention rate of workers engaged in long batches of similar tasks. We show how increasing or decreasing the monetary reward over time influences the number of tasks a worker is willing to complete in a batch, as well as how it influences the overall latency. We compare our new pricing schemes against traditional pricing methods (e.g., a constant reward for all the HITs in a batch) and empirically show how certain schemes effectively function as an incentive for workers to keep working longer on a given batch of HITs. Our experimental results show that the best pricing scheme in terms of worker retention is based on punctual bonuses paid whenever workers reach predefined milestones.
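The pricing schemes compared in the abstract can be sketched as simple per-task reward functions. The following is a minimal illustration only: the base rates, step sizes, and milestone positions below are hypothetical placeholders, not the parameters used in the paper's experiments.

```python
def constant_reward(task_index, base=0.05):
    """Traditional scheme: the same reward for every HIT in the batch."""
    return base

def increasing_reward(task_index, base=0.03, step=0.001):
    """Reward grows as the worker completes more tasks."""
    return base + step * task_index

def decreasing_reward(task_index, base=0.07, step=0.001, floor=0.01):
    """Reward shrinks over the batch, never dropping below a floor."""
    return max(floor, base - step * task_index)

def milestone_bonus(task_index, base=0.05, milestones=(10, 25, 50), bonus=0.25):
    """Constant reward plus a punctual bonus at predefined milestones,
    the scheme the paper found best for worker retention."""
    return base + (bonus if (task_index + 1) in milestones else 0.0)

# Total payout for a worker who completes 50 tasks under each scheme
# (illustrative parameters only):
totals = {
    f.__name__: sum(f(i) for i in range(50))
    for f in (constant_reward, increasing_reward,
              decreasing_reward, milestone_bonus)
}
```

With these placeholder parameters, the schemes can be tuned so that a worker completing the full batch earns a comparable total, which lets a comparison isolate the effect of the reward's shape over time rather than its overall amount.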


Keywords: Crowdsourcing; Latency; Retention


