Published: 2014-11-05
Proceedings: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2
Issue: Vol. 2 (2014): Second AAAI Conference on Human Computation and Crowdsourcing
Track: Research Papers
Abstract:
The recent advent of human computation -- employing non-experts to solve problems -- has inspired theoretical work in mechanism design for eliciting information when responses cannot be verified. We study a popular practical method, output agreement, from a theoretical perspective. In output agreement, two agents are given the same inputs and asked to produce some output; they are scored based on how closely their responses agree. Although simple, output agreement raises new conceptual questions. Primary among them is the fundamental importance of common knowledge: we show that, rather than being truthful, output agreement mechanisms elicit common knowledge from participants. We show that common knowledge is essentially the best that can be hoped for in any mechanism without verification unless there are restrictions on the information structure. This involves generalizing truthfulness to include responding to a query rather than simply reporting a private signal, along with a notion of common-knowledge equilibria. A final important issue raised by output agreement concerns focal equilibria and how players compute equilibria. We show that, for eliciting the mean of a random variable, a natural player inference process converges to the common-knowledge equilibrium; but this convergence may not occur for other types of queries.
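The following sketch is not the paper's formal model; it is a minimal toy illustration, assuming a Gaussian common prior and hypothetical parameter names (mu, tau2, sigma2), of the abstract's claim about eliciting a mean under output agreement. Each agent is rewarded for matching the other's report, so a best response is to report one's expectation of the other's report; iterating that reasoning shrinks the weight on the private signal and drives reports toward the prior mean, the common-knowledge estimate, rather than toward the private posterior means.

```python
# Toy sketch (assumed Gaussian model, not the paper's general setting):
# X ~ N(mu, tau2) is common knowledge; agent i observes s_i = X + eps_i,
# eps_i ~ N(0, sigma2). Under output agreement, level-k reasoning reports
# the expectation of a level-(k-1) opponent's report.

def posterior_weight(tau2: float, sigma2: float) -> float:
    """Weight w on the private signal in E[X | s_i] = w*s_i + (1 - w)*mu."""
    return tau2 / (tau2 + sigma2)

def iterated_reports(signal: float, mu: float, tau2: float,
                     sigma2: float, rounds: int) -> list[float]:
    """Level 0 reports the private posterior mean; each further level of
    reasoning about the opponent multiplies the signal's weight by w < 1,
    so reports converge geometrically to the prior mean mu."""
    w = posterior_weight(tau2, sigma2)
    reports = []
    weight = w                      # weight on the private signal at level 0
    for _ in range(rounds):
        reports.append(weight * signal + (1.0 - weight) * mu)
        weight *= w                 # one more level of "what will they report?"
    return reports

if __name__ == "__main__":
    mu, tau2, sigma2 = 0.0, 1.0, 1.0    # hypothetical prior and noise variances
    for level, r in enumerate(iterated_reports(signal=2.0, mu=mu,
                                               tau2=tau2, sigma2=sigma2,
                                               rounds=8)):
        print(f"level {level}: report = {r:+.4f}")
    # Output shrinks toward mu = 0: the common-knowledge value, not E[X | s_i].
```

In this toy model the convergence is immediate and monotone; the paper's point is that such convergence is a property of mean queries and need not hold for other query types.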
DOI: 10.1609/hcomp.v2i1.13151
ISBN 978-1-57735-682-0