Published:
2013-11-10
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Volume 1
Issue:
Vol. 1 (2013): First AAAI Conference on Human Computation and Crowdsourcing
Track:
Works in Progress
Abstract:
Assistance in creating high-quality exams would be welcomed by educators who do not have direct access to the proprietary data and methods used by educational testing companies. The current approach to measuring question difficulty relies on models that contrast how high-performing students answer a question with how their lower-performing peers answer it. Inverting this process, so that educators can evaluate their questions before any student answers them, would speed up question development and improve the utility of the resulting questions. We present two methods for automatically estimating the difficulty and discriminating power of multiple-choice questions (MCQs), and discuss how best to assemble effective exams from good questions.
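(For readers unfamiliar with the conventional approach the abstract contrasts against, here is a minimal sketch of the classical test-theory statistics it alludes to: item difficulty as the proportion of correct answers, and item discrimination as the gap in correctness between top- and bottom-scoring student groups. The function, data, and 27% cut-off are illustrative conventions, not the two methods proposed in this paper.)

```python
def item_statistics(responses, item, frac=0.27):
    """Classical item statistics, sketched for illustration.

    responses: list of dicts mapping item id -> 1 (correct) / 0 (incorrect).
    Returns (difficulty, discrimination) for the given item.
    """
    # Difficulty: share of all students who answered the item correctly.
    difficulty = sum(r[item] for r in responses) / len(responses)

    # Discrimination: rank students by total score, then compare the item's
    # correctness rate in the top and bottom groups (27% is the customary
    # cut-off in classical test theory).
    ranked = sorted(responses, key=lambda r: sum(r.values()))
    k = max(1, int(frac * len(ranked)))
    low, high = ranked[:k], ranked[-k:]
    discrimination = (sum(r[item] for r in high) - sum(r[item] for r in low)) / k
    return difficulty, discrimination

# Hypothetical example: six students, three questions.
students = [
    {"q1": 1, "q2": 0, "q3": 0},
    {"q1": 1, "q2": 0, "q3": 1},
    {"q1": 1, "q2": 1, "q3": 0},
    {"q1": 1, "q2": 1, "q3": 1},
    {"q1": 0, "q2": 1, "q3": 1},
    {"q1": 1, "q2": 1, "q3": 1},
]
print(item_statistics(students, "q2"))  # -> (0.666..., 1.0)
```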
DOI:
10.1609/hcomp.v1i1.13129
ISBN:
978-1-57735-607-3