Published:
2015-11-12
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3
Volume/Issue:
Vol. 3 (2015): Third AAAI Conference on Human Computation and Crowdsourcing
Track:
Crowdsourcing Breakthroughs for Language Technology Applications Workshop
Abstract:
Community Question Answering (CQA) websites are a popular tool for internet users to fulfill diverse information needs. Posted questions can be multiple sentences long and span diverse domains. They go beyond factoid questions and can be conversational, opinion-seeking, and experiential questions that may have multiple, potentially conflicting, useful answers from different users. In this paper, we describe a large-scale formative study to collect commonsense properties of questions and answers from 18 diverse communities on stackexchange.com. We collected 50,000 human judgments on 500 question-answer pairs. Commonsense properties are features that humans can extract and characterize reliably using their commonsense knowledge and native language skills; no special domain expertise is assumed. We report results and suggestions for designing human computation tasks for collecting commonsense semantic judgments.
DOI:
10.1609/hcomp.v3i1.13267
ISBN 978-1-57735-740-7