IMEX: Overcoming Intractability in Explanation Based Learning

Michael S. Braverman, Stuart J. Russell

Compiled knowledge, which allows macro inference steps through an explanation space, can enable explanation-based learning (EBL) systems to reason efficiently in complex domains. Without this knowledge, the explanation of goal concepts is not generally feasible; moreover, the problem of finding the most general operational concept definition is intractable. Unfortunately, the use of compiled knowledge leads to explanations which yield overly specific concept definitions. These concept definitions may be overly specific in one of two ways: either a similar concept definition with one or more constants changed to variables is operational, or a concept definition which is more general, according to the implication rules of the domain theory, is operational. This paper introduces a method (IMEX) for modifying, in a directed manner, the explanation structures of goal concepts that have been derived using compiled knowledge. In this way, more general operational concept definitions may be obtained.

Copyright © AAAI. All rights reserved.