Published:
2015-11-12
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3
Issue:
Vol. 3 (2015): Third AAAI Conference on Human Computation and Crowdsourcing
Track:
Crowdsourcing Breakthroughs for Language Technology Applications Workshop
Abstract:
Automated systems that aid in the development of Multiple Choice Questions (MCQs) have value both for educators, who spend considerable time creating novel questions, and for students, who invest substantial effort practicing for and taking tests. The standard approach to measuring MCQ difficulty models how high-performing students answer a question and contrasts that with the performance of their lower-scoring peers. MCQs can be difficult in many ways. This paper looks specifically at the effect of the number of words in both the question stem and the answer options on question difficulty. This work is based on the hypothesis that questions are more difficult when the stem and the answer options are semantically far apart. This semantic distance can be normalized, in part, by accounting for the length of the texts being compared. The MCQs used in the experiments were voluntarily authored by university students in biology courses. Future work includes additional experiments using other aspects of this extensive crowdsourced data set.
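As a rough illustration of the hypothesis above (not the authors' actual measure, which the abstract does not specify), the sketch below scores a question by the average bag-of-words cosine distance between the stem and each answer option, damped by a length term as a crude normalization. The function names, the choice of distance measure, and the logarithmic length normalization are all assumptions made for this example.

import math
from collections import Counter

def bow_cosine_distance(a: str, b: str) -> float:
    """1 - cosine similarity of bag-of-words count vectors (a stand-in
    for whatever semantic measure the paper actually uses)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def difficulty_score(stem: str, options: list[str]) -> float:
    """Mean stem-option distance, divided by a log-length term as a
    crude normalization for the amount of text being compared."""
    distances = [bow_cosine_distance(stem, opt) for opt in options]
    total_words = len(stem.split()) + sum(len(o.split()) for o in options)
    return sum(distances) / len(distances) / math.log(1 + total_words)

# Hypothetical biology-style MCQ, echoing the paper's domain.
stem = "Which organelle is the site of aerobic respiration?"
options = ["Mitochondrion", "Ribosome", "Golgi apparatus", "Nucleus"]
print(f"hypothetical difficulty score: {difficulty_score(stem, options):.3f}")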
DOI:
10.1609/hcomp.v3i1.13268
ISBN 978-1-57735-740-7