Published:
2018-07-09
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 6
Issue:
Vol. 6 (2018): Sixth AAAI Conference on Human Computation and Crowdsourcing
Track:
Technical Papers
Abstract:
One of the most popular quality assurance mechanisms in paid micro-task crowdsourcing is based on gold questions: a small set of tasks for which the requester knows the correct answer and can therefore directly assess the quality of crowd work. In this paper, we show that such a mechanism is prone to an attack, carried out by a group of colluding crowd workers, that is easy to implement and deploy: the inherent size limit of the gold set can be exploited by building an inferential system to detect which parts of the job are more likely to be gold questions. The described attack is robust to various forms of randomisation and programmatic generation of gold questions. We present the architecture of the proposed system, composed of a browser plug-in and an external server used to share information, and briefly introduce its potential evolution towards a decentralised implementation. We implement and experimentally validate the gold detection system using real-world data from a popular crowdsourcing platform. Finally, we discuss the economic and sociological implications of this kind of attack.
DOI:
10.1609/hcomp.v6i1.13332
ISBN 978-1-57735-799-5