Abstract:
When AI technologies are applied to real-world problems, it is often difficult for developers to anticipate all the knowledge the system will need. Previous research has shown that introspective reasoning can be a useful tool for addressing this problem in case-based reasoning (CBR) systems, by enabling them to augment their routine learning of cases with learning to make better use of their cases, as problem-solving experience reveals deficiencies in their reasoning process. In this paper we present a new introspective model for autonomously improving the performance of a CBR system by reasoning about its problem-solving failures. We illustrate its benefits with experimental results from tests in an industrial design application.
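The abstract does not describe the model itself, but the core idea it refers to, learning to use existing cases better when a failure reveals a deficiency in the reasoning process rather than simply storing more cases, can be sketched in a few lines. The sketch below is an illustrative assumption only: the class names, the feature-weight repair strategy, and the `evaluate` callback are not taken from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class Case:
    features: dict[str, float]
    solution: str


@dataclass
class IntrospectiveCBR:
    """Toy CBR loop that revises its retrieval knowledge after failures.

    Assumption: introspective repair is modelled here as adjusting feature
    weights in the similarity metric; the paper's actual model may differ.
    """
    cases: list[Case]
    weights: dict[str, float] = field(default_factory=dict)
    learning_rate: float = 0.1

    def similarity(self, a: dict[str, float], b: dict[str, float]) -> float:
        # Weighted negative distance over shared features (higher is closer).
        return -sum(self.weights.get(f, 1.0) * abs(a[f] - b[f])
                    for f in a if f in b)

    def retrieve(self, problem: dict[str, float]) -> Case:
        return max(self.cases, key=lambda c: self.similarity(problem, c.features))

    def solve(self, problem: dict[str, float], evaluate) -> str:
        case = self.retrieve(problem)
        solution = case.solution  # adaptation step omitted for brevity
        if not evaluate(problem, solution):
            # Failure-driven introspection: diagnose and repair the
            # reasoning process, not just the case base.
            self.introspect(problem, case)
        return solution

    def introspect(self, problem: dict[str, float], retrieved: Case) -> None:
        # Illustrative repair: a failure suggests the mismatched features
        # mattered more than the current weights reflect, so penalize
        # mismatches on them more heavily in future retrievals.
        for f, value in problem.items():
            if f in retrieved.features:
                gap = abs(value - retrieved.features[f])
                self.weights[f] = (self.weights.get(f, 1.0)
                                   + self.learning_rate * gap)


if __name__ == "__main__":
    # Hypothetical usage: a simulated design failure triggers a weight update.
    cbr = IntrospectiveCBR(cases=[Case({"load": 1.0, "span": 2.0}, "truss-A"),
                                  Case({"load": 5.0, "span": 2.0}, "truss-B")])
    cbr.solve({"load": 4.8, "span": 2.1}, evaluate=lambda p, s: False)
    print(cbr.weights)  # weights on mismatched features have increased
```

The point of the sketch is only the control flow: routine case retrieval continues as usual, and the introspective component is invoked solely when an evaluation signal indicates a problem-solving failure.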