Published:
2018-07-09
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 6
Issue:
Vol. 6 (2018): Sixth AAAI Conference on Human Computation and Crowdsourcing
Track:
Technical Papers
Abstract:
Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update his or her results according to the review. Self-correction was proposed as an approach complementary to statistical algorithms in which workers independently perform the same task. It can provide higher-quality results at little additional cost. However, thus far, its effects have been demonstrated only in simulations, and empirical evaluations are needed. In addition, because self-correction gives feedback to workers, an interesting question arises: whether perceptual learning is observed in self-correction tasks. This paper reports our experimental results on self-correction with a real-world crowdsourcing service. The empirical results show the following: (1) Self-correction is effective in making workers reconsider their judgments. (2) Self-correction is more effective if workers are shown task results produced by higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases. Self-correction can give feedback that shows workers how to provide high-quality answers in future tasks. The findings imply that we can construct a positive loop to improve the quality of workers effectively. We also analyze the cases in which perceptual learning can be observed with self-correction in crowdsourced microtasks.
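To make the two-stage setting concrete, the following is a minimal sketch of a self-correction workflow in Python. It is not the authors' implementation; the pairing policy, function names, and the toy review rule are all illustrative assumptions. Stage 1 collects independent answers; stage 2 shows each worker a peer's answer and lets the worker keep or update their judgment.

    # Minimal sketch of two-stage self-correction (illustrative assumptions only).
    from dataclasses import dataclass

    @dataclass
    class Answer:
        worker_id: str
        label: str

    def first_stage(workers, task, answer_fn):
        """Stage 1: each worker answers the task independently."""
        return [Answer(w, answer_fn(w, task)) for w in workers]

    def second_stage(answers, review_fn):
        """Stage 2: each worker reviews a peer's answer to the same task
        and may update their own result in light of it."""
        n = len(answers)
        final = []
        for i, own in enumerate(answers):
            peer = answers[(i + 1) % n]  # round-robin pairing (an assumption)
            final.append(Answer(own.worker_id, review_fn(own, peer)))
        return final

    if __name__ == "__main__":
        workers = ["w1", "w2", "w3"]
        initial = {"w1": "cat", "w2": "dog", "w3": "cat"}  # hypothetical stage-1 labels
        stage1 = first_stage(workers, task="image-42",
                             answer_fn=lambda w, t: initial[w])
        # Toy review rule: defer to the peer when the peer disagrees, modeling a
        # worker who reconsiders after seeing another worker's result.
        stage2 = second_stage(stage1,
                              review_fn=lambda own, peer: peer.label)
        for before, after in zip(stage1, stage2):
            print(before.worker_id, before.label, "->", after.label)

In practice the paper's second point suggests the pairing policy matters: showing answers from higher-quality workers in stage 2 makes the correction step more effective than arbitrary pairing.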
DOI:
10.1609/hcomp.v6i1.13324
ISBN:
978-1-57735-799-5