Abstract:
This study reports on an experiment analyzing a variety of entailment evaluations produced by a lexico-syntactic tool, the Entailer. The analyses are conducted on a corpus of self-explanations collected from the Intelligent Tutoring System iSTART. The purpose of the study is to examine how hand-coded evaluations of entailment, paraphrase, and elaboration compare to the evaluations provided by the Entailer. These evaluations include standard (forward) entailment as well as the new indices of Reverse- and Average-Entailment. The study finds that the Entailer’s indices match or surpass human evaluators in making textual evaluations. These findings have important implications for providing accurate and appropriate feedback to users of Intelligent Tutoring Systems.