Investigating the Validity of a Test Case Selection Methodology for Expert System Validation

Jan-Eike Michels, Avelino J. Gonzalez, Thomas Abel, Rainer Knauf

Providing assurances of performance is an important aspect of the successful development and commercialization of expert systems. Such assurances, however, can only be given if the quality of the system is established through a rigorous and effective validation process. A generally accepted validation technique that can, if implemented properly, lead to a determination of validity (a validity statement) has been an elusive goal, and this has led to a generally haphazard way of validating expert systems. Validation has traditionally been done mostly through the use of test cases: a set of test cases, whose solutions are previously known and benchmarked, is presented to the expert system, and a comparison of the system's solutions with those of the test cases is used to generate a validity statement. This is an intuitive way of testing the performance of any system, but it requires some consideration of how extensively the system must be tested in order to produce a reliable validity statement. The only completely reliable statement of a system's validity would result from exhaustive testing, which is commonly considered impractical for all but the most trivial of systems. A better means of selecting "good" test cases must therefore be developed. The authors have developed a framework for such a selection (Abel, Knauf, and Gonzalez 1996). This paper describes an investigation undertaken to evaluate the effectiveness of this framework by using it to validate a small but robust expert system that classifies birds.
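The comparison step described above can be illustrated with a minimal sketch (not part of the original paper). The function names, the toy bird-classification rule, and the agreement threshold are all hypothetical assumptions introduced only to show how benchmarked test cases could be compared against an expert system's output to derive a simple validity measure.

```python
# Minimal sketch of test-case-based validation (hypothetical; not the authors' method).
# Each test case pairs input features with a benchmarked (expert-agreed) solution; the
# expert system's answers are compared against the benchmarks to derive a validity measure.

def validate(expert_system, test_cases, threshold=0.95):
    """Return an agreement rate and a coarse validity statement.

    expert_system: callable mapping a case's input features to a solution.
    test_cases:    list of (features, benchmarked_solution) pairs.
    threshold:     hypothetical agreement level required to call the system valid.
    """
    agreements = sum(
        1 for features, expected in test_cases
        if expert_system(features) == expected
    )
    rate = agreements / len(test_cases)
    statement = "valid" if rate >= threshold else "not yet valid"
    return rate, statement


# Illustrative usage with a toy bird-classification rule:
def classify_bird(features):
    if features.get("can_fly") and features.get("sings"):
        return "songbird"
    return "other"

cases = [
    ({"can_fly": True, "sings": True}, "songbird"),
    ({"can_fly": False, "sings": False}, "other"),
]
print(validate(classify_bird, cases))
```

The sketch deliberately reduces the validity statement to a single agreement rate; the paper's framework is concerned with the harder question of which test cases to include so that such a rate is actually meaningful without exhaustive testing.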

