Abstract:
The ability to generate explanations plays a central role in human cognition. Generating explanations requires a deep conceptual understanding of the domain in question and tremendous flexibility in the way concepts are accessed and used. Together, these demands constitute challenging design requirements for a model of explanation. We describe our progress toward providing such a model, based on the LISA model of analogical inference. We augment LISA with a novel representation of causal relations, and with an ability to flexibly combine knowledge from multiple sources in LTM without falling victim to the type-token problem. We demonstrate how the resulting model can serve as a starting point for an explicit process model of explanation.