Published:
2019-10-21
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7
Volume/Issue:
Vol. 7 (2019): Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing
Track:
Technical Papers
Abstract:
In many online environments, such as massive open online courses and crowdsourcing platforms, many people solve similar complex tasks. As a byproduct of solving these tasks, a pool of artifacts is created that may help others perform better on similar tasks. In this paper, we explore whether work naturally produced by crowdworkers can be used as examples to help future crowdworkers perform better on similar tasks. We explore this in the context of a product comparison review task, where workers must compare and contrast pairs of similar products. We first show that randomly presenting one or two peer-generated examples does not significantly improve performance on future tasks. In a second experiment, we show that presenting examples of sufficiently high quality leads to a statistically significant improvement in the performance of future workers on a near transfer task. Moreover, our results suggest that even among high-quality examples, there are differences in how effective the examples are, indicating that quality is not a perfect proxy for pedagogical value.
DOI:
10.1609/hcomp.v7i1.5269
ISBN:
978-1-57735-820-6