Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Volume 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
We provide an empirical analysis of peer prediction mechanisms, which reward participants for information in settings where there is no ground truth against which to score reports. We simulate the mechanisms on a dataset of three million peer assessments from the edX MOOC platform. We evaluate different mechanisms on score variability, which is connected to fairness, risk aversion, and participant learning. We also assess the magnitude of the incentives to invest effort, and study the effect of participants coordinating on low-information signals. We find that the correlated agreement mechanism has lower variation in reward than other mechanisms. A concern is that the gain from exerting effort is relatively low across all mechanisms, due to frequent disagreement between peers. Our conclusions are relevant for crowdsourcing in education as well as other domains.
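For readers unfamiliar with the correlated agreement mechanism mentioned in the abstract, the general idea in the peer prediction literature is to reward agreement with a peer on a shared task, offset by agreement on reports drawn from two distinct tasks, with "agreement" weighted by the sign of the empirically estimated signal correlation. The Python sketch below is illustrative only, under the assumption of a simple signal co-occurrence count matrix as input; the function names, data layout, and example counts are hypothetical and not taken from the paper.

import numpy as np

def ca_scoring_matrix(joint_counts):
    # Empirical joint distribution over (own signal, peer signal) pairs,
    # estimated from co-occurrence counts on shared tasks.
    joint = joint_counts / joint_counts.sum()
    # Product of marginals: the agreement rate expected under independence.
    expected = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
    # Score 1 for signal pairs that are positively correlated, 0 otherwise.
    return (joint - expected > 0).astype(float)

def ca_payment(scoring, own_bonus, peer_bonus, own_penalty, peer_penalty):
    # Reward agreement on the shared "bonus" task, penalize agreement
    # between reports taken from two distinct "penalty" tasks.
    return scoring[own_bonus, peer_bonus] - scoring[own_penalty, peer_penalty]

# Example with three signal levels (e.g. low / medium / high rubric grades);
# the counts here are made up purely for illustration.
counts = np.array([[40.0, 10.0, 5.0],
                   [10.0, 60.0, 15.0],
                   [5.0, 15.0, 40.0]])
S = ca_scoring_matrix(counts)
reward = ca_payment(S, own_bonus=2, peer_bonus=2, own_penalty=0, peer_penalty=1)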
DOI:
10.1609/hcomp.v4i1.13285
ISBN 978-1-57735-774-2