Published:
2013-11-10
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Volume 1
Issue:
Vol. 1 (2013): First AAAI Conference on Human Computation and Crowdsourcing
Track:
Scaling Speech, Language Understanding and Dialogue through Crowdsourcing Workshop
Abstract:
Crowds of people can potentially solve some problems faster than individuals. Crowdsourced data can be leveraged to benefit the crowd by providing information or solutions faster than traditional means. Many tasks needed for developing dialogue systems, such as annotation, can benefit from crowdsourcing as well. We investigate how to outsource dialogue data annotation through Amazon Mechanical Turk. In particular, we are interested in empirically analyzing how much context from earlier parts of the dialogue (e.g., previous dialogue turns) must be shown before the target dialogue turn is presented to the annotator. The answer to this question is essential for leveraging crowdsourced data for appropriate and efficient response and coordination. We study how presenting different numbers of previous turns to the Turkers annotating sentiments in dyadic negotiation dialogues affects inter-annotator reliability and agreement with the gold standard.
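As a rough illustration of the kind of analysis the abstract describes, the sketch below computes inter-annotator agreement for sentiment labels collected under different context-window sizes. The paper does not specify which reliability metric it uses, so Fleiss' kappa serves here only as a plausible stand-in, and all data, label categories, and context sizes are hypothetical.

import numpy as np

# Illustrative sketch (not from the paper): Fleiss' kappa over sentiment
# labels gathered under different numbers of preceding dialogue turns.
def fleiss_kappa(counts):
    """counts[i, j] = number of annotators who gave item i the label j."""
    n_items = counts.shape[0]
    n_raters = counts.sum(axis=1)[0]                  # assumes equal raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)   # overall label proportions
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 3 Turkers label each target turn as negative/neutral/
# positive after seeing 0, 1, or 3 preceding turns of the negotiation.
annotations = {
    0: np.array([[2, 1, 0], [1, 1, 1], [0, 2, 1]]),
    1: np.array([[3, 0, 0], [1, 2, 0], [0, 1, 2]]),
    3: np.array([[3, 0, 0], [0, 3, 0], [0, 0, 3]]),
}

for context_turns, counts in annotations.items():
    print(f"{context_turns} prior turns: kappa = {fleiss_kappa(counts):.2f}")

In this toy setup, agreement rises with the amount of context shown, which is the kind of effect the study is designed to measure; the actual direction and size of the effect come from the paper's experiments, not from this sketch.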
DOI:
10.1609/hcomp.v1i1.13094
ISBN 978-1-57735-607-3