Published:
2016-11-03
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Volume 4
Issue:
Vol. 4 (2016): Fourth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
The current state-of-the-art method for generating educational content, such as math word problems and hints, is manual authoring by domain experts. Unfortunately, this is costly, time-consuming, and produces content that lacks diversity. Attempts to automatically address the time and diversity issues through natural language generation still do not produce content that is sufficiently creative and varied. Crowdsourcing is a viable alternative: there has been a great deal of research on leveraging human creativity to solve complex problems, such as user interface design. However, these systems typically decompose complex tasks into subtasks. Writing a single word problem or hint is a small enough problem that it is unclear how to break it down further, yet it is far more complex than typical microtasks like image labeling. Therefore, it is not obvious how to apply these worker-improvement methods, or which ones are most effective, if any. We build upon successful task design factors from prior work and run a series of iterative studies, incrementally adding different worker-support elements. Our results show that successive task designs improved accuracy and creativity.
DOI:
10.1609/hcomp.v4i1.13295
ISBN:
978-1-57735-774-2