Abstract:
Many of the inferences and decisions which contribute to understanding involve fallible assumptions. When these assumptions are undermined, computational models of comprehension should respond rationally. This paper crossbreeds AI research on problem solving and understanding to produce a hybrid model ("reasoned understanding"). In particular, the paper shows how non-monotonic dependencies [Doyle79] enable a schema-based story processor to adjust to new information requiring the retraction of assumptions.