Toward a Comprehension Challenge, Using Crowdsourcing as a Tool

Authors

  • Praveen Paritosh, Google, Inc.
  • Gary Marcus, New York University

DOI:

https://doi.org/10.1609/aimag.v37i1.2649

Abstract

Human readers comprehend vastly more, and in vastly different ways, than any existing comprehension test would suggest. An ideal comprehension test for a story should cover the full range of questions and answers that humans would expect other humans to reasonably learn or infer from a given story. As a step toward these goals we propose a novel test, the Crowdsourced Comprehension Challenge (C3), which is constructed by repeated runs of a three-person game, the Iterative Crowdsourced Comprehension Game (ICCG). ICCG uses structured crowdsourcing to comprehensively generate relevant questions and supported answers for arbitrary stories, whether fiction or nonfiction, presented across a variety of media such as videos, podcasts, and still images.
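The abstract describes the mechanism only at a high level. As a rough illustration of how repeated runs of a three-person game might accumulate a question-and-answer set, here is a minimal Python sketch; the role names (asker, answerer, judge), the verification rule, and the round count are assumptions for exposition, not the paper's actual protocol.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of an ICCG-style loop. The roles, the verification
# rule, and the round count are illustrative assumptions, not the design
# as specified in the paper.

@dataclass
class QAPair:
    question: str
    answer: str
    verified: bool  # did the third player judge the answer as supported?

class Worker:
    """Stub crowd worker; a real implementation would route these calls
    to a crowdsourcing platform."""
    def ask(self, story: str) -> str: ...
    def answer(self, story: str, question: str) -> str: ...
    def judge(self, story: str, question: str, answer: str) -> bool: ...

def run_round(story: str, asker: Worker, answerer: Worker, judge: Worker) -> QAPair:
    # One round of the three-person game: pose a question about the story,
    # answer it independently, then check whether the story supports the answer.
    q = asker.ask(story)
    a = answerer.answer(story, q)
    return QAPair(q, a, judge.judge(story, q, a))

def build_challenge(story: str, pool: list[Worker], rounds: int = 50) -> list[QAPair]:
    # Repeated runs accumulate verified question-answer pairs; together
    # they form the comprehension test for this story.
    pairs = []
    for _ in range(rounds):
        asker, answerer, judge = random.sample(pool, 3)
        pair = run_round(story, asker, answerer, judge)
        if pair.verified:
            pairs.append(pair)
    return pairs
```

In this sketch, the check by an independent third player stands in for the abstract's requirement that generated questions come with supported answers, not merely plausible ones.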

Published

2016-04-13

How to Cite

Paritosh, P., & Marcus, G. (2016). Toward a Comprehension Challenge, Using Crowdsourcing as a Tool. AI Magazine, 37(1), 23-30. https://doi.org/10.1609/aimag.v37i1.2649

Issue

Vol. 37 No. 1 (2016)

Section

Articles