Track: All Papers
Abstract:
This paper analyzes the impact of several lexical and grammatical features in the automated assessment of students' fine-grained understanding of tutored concepts. Truly effective dialog and pedagogy in Intelligent Tutoring Systems are only achievable when systems are able to understand the detailed relationships between a student's answer and the desired conceptual understanding. We describe a new method for recognizing whether a student's response entails that they understand the concepts being taught. We discuss the need for a finer-grained analysis of answers and describe a new representation for reference answers that addresses this need, breaking each reference answer into detailed facets and annotating each facet's relationship to the student's answer more precisely. Even at this level of detail, human annotation achieves substantial inter-annotator agreement: 86.0%, with a Kappa statistic of 0.724. We present our approach to automatically assessing student answers, which involves training machine learning classifiers on features extracted from dependency parses of the reference answer and the student's response, along with features derived from domain-independent linguistic statistics. Our system's performance, 75.5% accuracy within domain and 65.9% out of domain, is encouraging and confirms the feasibility of the approach. Another significant contribution of this work is that the semantic assessment of answers is domain independent.
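As a rough illustration of the classification step described above, the sketch below trains a classifier to decide whether a student's answer expresses a given reference-answer facet. The lexical-overlap features, the toy data, and the choice of scikit-learn's LogisticRegression are all illustrative assumptions standing in for the paper's actual dependency-parse and corpus-statistics features; this is a minimal sketch of the general technique, not the authors' system.

```python
# Hypothetical sketch: entailment-style classification of student answers
# against reference-answer facets. Feature definitions, training data, and
# the classifier choice are illustrative assumptions, not the paper's system.
from sklearn.linear_model import LogisticRegression

def facet_features(facet: str, answer: str) -> list[float]:
    """Toy lexical-overlap features standing in for the paper's
    dependency-parse and linguistic-statistics features."""
    f_tokens = set(facet.lower().split())
    a_tokens = set(answer.lower().split())
    overlap = len(f_tokens & a_tokens)
    return [
        overlap / max(len(f_tokens), 1),  # fraction of facet covered
        overlap / max(len(a_tokens), 1),  # fraction of answer on-topic
        float(f_tokens <= a_tokens),      # facet fully expressed verbatim
    ]

# Tiny invented training set: (reference facet, student answer, understood?)
data = [
    ("the string is under tension", "the string is pulled tight", 1),
    ("the string is under tension", "the box is heavy", 0),
    ("vibration produces sound", "vibration makes the sound", 1),
    ("vibration produces sound", "I do not know", 0),
]
X = [facet_features(facet, answer) for facet, answer, _ in data]
y = [label for _, _, label in data]

clf = LogisticRegression().fit(X, y)

# Assess an unseen student response against one facet.
print(clf.predict([facet_features("the string is under tension",
                                  "there is tension in the string")]))
```

Because the features here are domain-independent comparisons between a facet and an answer, rather than domain-specific keywords, the same trained model can in principle be applied to new tutoring domains, which is the property the paper's out-of-domain evaluation tests.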