AAAI Publications, Sixth AAAI Conference on Human Computation and Crowdsourcing

An Empirical Study on Short- and Long-Term Effects of Self-Correction in Crowdsourced Microtasks
Masaki Kobayashi, Hiromi Morita, Masaki Matsubara, Nobuyuki Shimizu, Atsuyuki Morishima

Last modified: 2018-06-15

Abstract


Self-correction for crowdsourced tasks is a two-stage setting in which a crowd worker reviews the task results of other workers and is then given a chance to update his/her own results according to the review. Self-correction was proposed as an approach complementary to statistical algorithms in which workers independently perform the same task, and it can provide higher-quality results at little additional cost. However, its effects have thus far been demonstrated only in simulations, and empirical evaluations are needed. In addition, because self-correction gives feedback to workers, an interesting question arises: is perceptual learning observed in self-correction tasks? This paper reports our experimental results on self-correction with a real-world crowdsourcing service. The empirical results show the following: (1) Self-correction is effective in making workers reconsider their judgments. (2) Self-correction is more effective when workers are shown task results produced by higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases; self-correction can give feedback that shows workers how to provide high-quality answers in future tasks. These findings imply that we can construct a positive loop that effectively improves worker quality. We also analyze the cases in which perceptual learning can be observed with self-correction in crowdsourced microtasks.

Keywords


Crowdsourcing; Quality Control; Crowd Worker
