Published:
2021-11-14
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 9
Issue:
Vol. 9 (2021): Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Archival Papers
Abstract:
Crowdsourcing enables the solicitation of forecasts on a variety of prediction tasks from distributed groups of people. How to aggregate the solicited forecasts, which may vary in quality, into an accurate final prediction remains a challenging yet critical question. Studies have found that weighting expert forecasts more heavily in aggregation can improve the accuracy of the aggregated prediction. However, this approach usually requires access to the historical performance data of the forecasters, which are often not available. In this paper, we study the problem of aggregating forecasts without historical performance data. We propose using peer prediction methods, a family of mechanisms originally designed to truthfully elicit private information in the absence of ground truth verification, to assess the expertise of forecasters, and then using this assessment to improve forecast aggregation. We evaluate our peer-prediction-aided aggregators on a diverse collection of 14 human forecast datasets. Compared with a variety of existing aggregators, our aggregators achieve a significant and consistent improvement in aggregation accuracy as measured by the Brier score and the log score. Our results reveal the effectiveness of identifying experts to improve aggregation even without historical data.
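The pipeline the abstract describes can be illustrated with a minimal sketch. The paper's actual expertise scores come from peer prediction mechanisms; here a simple pairwise-agreement proxy stands in for them (an assumption for illustration only, not the paper's method), and the scores become softmax weights in a weighted average, evaluated against a uniform average with the Brier score:

```python
import math

def agreement_scores(forecasts):
    """Hypothetical proxy score: average negative squared distance of a
    forecaster's probabilities to every other forecaster's (higher = closer
    to the crowd). NOT the paper's peer prediction mechanism."""
    scores = []
    for i, f in enumerate(forecasts):
        sq = [
            (p - q) ** 2
            for j, g in enumerate(forecasts) if j != i
            for p, q in zip(f, g)
        ]
        scores.append(-sum(sq) / len(sq))
    return scores

def weighted_aggregate(forecasts, scores):
    """Turn scores into softmax weights, then take a weighted average
    of the forecasters' probabilities on each question."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    total = sum(w)
    w = [x / total for x in w]
    n_questions = len(forecasts[0])
    return [
        sum(w[i] * forecasts[i][q] for i in range(len(forecasts)))
        for q in range(n_questions)
    ]

def brier_score(pred, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes
    (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(pred, outcomes)) / len(pred)

# Toy data: five forecasters, three binary questions (probability of "yes").
forecasts = [
    [0.90, 0.20, 0.70],
    [0.80, 0.30, 0.60],
    [0.85, 0.25, 0.65],
    [0.10, 0.90, 0.10],  # outlier forecaster
    [0.20, 0.80, 0.20],
]
outcomes = [1.0, 0.0, 1.0]

scores = agreement_scores(forecasts)
weighted = weighted_aggregate(forecasts, scores)
uniform = [sum(f[q] for f in forecasts) / len(forecasts) for q in range(3)]
print("weighted Brier:", brier_score(weighted, outcomes))
print("uniform Brier: ", brier_score(uniform, outcomes))
```

On this toy data the score-weighted aggregate down-weights the outlier forecaster and obtains a lower (better) Brier score than the plain average, which is the effect the paper measures at scale with real peer prediction scores.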
DOI:
10.1609/hcomp.v9i1.18946
ISBN 978-1-57735-872-5