Abstract:
Recent research has focused on detecting “gaming” behavior by students using intelligent tutoring systems. For instance, Baker, Corbett and Koedinger (2004) reported detecting “gaming” by students, and argued that it explained these students’ lower learning outcomes. In this paper, we report that while our computer system’s predictions correlate well with students’ actual state test scores (r = .7), we found that we were systematically under-predicting their scores. We wondered whether that under-prediction had to do with students engaging in some form of gaming. We therefore examine whether some of the online metrics (e.g., rate of asking for hints) reported by Baker et al. correlate with our under-prediction of students’ scores. We report results from the Assistment Project’s data set of about 70 students, collected in May 2004. We performed a stepwise regression to determine which metrics help to explain our poor prediction of state exam scores. We conclude that while none of the metrics we used were statistically significant, several of them were correlated with our under-prediction, suggesting that there is information in these signals but that it may be too weak to detect with our small sample. For future work, we plan to replicate this method with the data set we are collecting this year, in which 600 students use the system for ten times as long.
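To make the analysis concrete, the following is a minimal sketch of one way such a stepwise regression could be set up, assuming a simple forward-selection variant implemented in Python with statsmodels. The metric names (hint_rate, fast_actions, time_per_item), the synthetic data, and the selection threshold are illustrative placeholders, not the Assistment Project’s actual variables or results.

```python
# Sketch: regress the system's under-prediction (actual state test score
# minus predicted score) on online-behavior metrics, adding predictors
# greedily by p-value (forward stepwise selection).  All data below are
# synthetic stand-ins for the roughly 70-student May 2004 sample.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 70

# Hypothetical per-student online metrics (placeholder names).
data = pd.DataFrame({
    "hint_rate": rng.uniform(0, 1, n),       # hints requested per problem
    "fast_actions": rng.uniform(0, 1, n),    # share of very fast responses
    "time_per_item": rng.normal(60, 15, n),  # seconds spent per item
})
# Placeholder outcome: actual exam score minus system-predicted score.
data["under_prediction"] = rng.normal(0, 5, n)

def forward_stepwise(df, target, candidates, alpha=0.05):
    """Greedily add the candidate with the lowest p-value until no
    remaining candidate is significant at the `alpha` level."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[target], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    fit = (sm.OLS(df[target], sm.add_constant(df[selected])).fit()
           if selected else None)
    return selected, fit

chosen, fit = forward_stepwise(
    data, "under_prediction", ["hint_rate", "fast_actions", "time_per_item"])
print("selected metrics:", chosen)
if fit is not None:
    print(fit.summary())
```

With a small sample, such a procedure can easily select no predictors at the chosen threshold even when several metrics show suggestive correlations with the under-prediction, which mirrors the pattern described in the abstract.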