Knowledge Level and Inductive Uses of Chunking (EBL)

Paul S. Rosenbloom, Jans Aasman

When explanation-based learning (EBL) is used for knowledge-level learning (KLL), training examples are essential, and EBL is not simply reducible to partial evaluation. A key enabling factor in this behavior is the use of domain theories in which not every element is believed a priori. When used with such domain theories, EBL provides a basis for rote learning (deductive KLL) and for induction from multiple examples (nondeductive KLL). This article lays the groundwork for using EBL in KLL by describing how EBL can lead to increased belief, and presents new results from using Soar's chunking mechanism, a variation on EBL, as the basis for a task-independent rote-learning capability and a version-space-based inductive capability. The latter provides a compelling demonstration of nondeductive KLL in Soar and a basis for integrating conventional EBL with induction. However, it also reveals how one of Soar's key assumptions, the non-penetrable memory assumption, makes this more complicated than it would otherwise be. This complexity may turn out to be appropriate, or it may point to where modifications of Soar are needed.
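For readers unfamiliar with the version-space framework the abstract refers to, the following is a minimal sketch of candidate elimination over conjunctive concepts with nominal attributes. It is illustrative only and is not the Soar/chunking implementation the paper describes; all names (`WILD`, `candidate_elimination`, etc.) are invented for this sketch.

```python
# Minimal candidate-elimination (version space) sketch for conjunctive
# concepts over nominal attributes. Hypotheses are tuples of attribute
# values, with "?" as a wildcard. NOT the paper's Soar implementation.

WILD = "?"  # matches any attribute value

def matches(hyp, example):
    """True if hypothesis hyp covers the example."""
    return all(h == WILD or h == e for h, e in zip(hyp, example))

def generalize(s, example):
    """Minimally generalize the specific boundary S to cover a positive."""
    if s is None:                       # S not yet initialized
        return tuple(example)
    return tuple(h if h == e else WILD for h, e in zip(s, example))

def specializations(g, s, example):
    """Minimal specializations of g that exclude a negative example,
    constrained to stay at least as general as S."""
    out = []
    for i, (gv, ev) in enumerate(zip(g, example)):
        if gv == WILD and s is not None and s[i] != WILD and s[i] != ev:
            out.append(g[:i] + (s[i],) + g[i + 1:])
    return out

def candidate_elimination(examples):
    """examples: list of (attribute_tuple, is_positive) pairs.
    Returns the specific boundary S (one hypothesis here, for simplicity)
    and the general boundary G (a list of hypotheses)."""
    n = len(examples[0][0])
    s = None                            # most specific boundary
    g = [(WILD,) * n]                   # most general boundary
    for example, positive in examples:
        if positive:
            s = generalize(s, example)
            g = [h for h in g if matches(h, example)]
        else:
            new_g = []
            for h in g:
                if matches(h, example):
                    new_g.extend(specializations(h, s, example))
                else:
                    new_g.append(h)
            g = new_g
    return s, g
```

Each positive example generalizes S and prunes G; each negative example specializes G. When S and G converge on the same hypothesis, the concept has been learned, which is the sense in which multiple examples yield nondeductive (inductive) knowledge-level learning.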
