Learning Multiple Models without Sacrificing Comprehensibility

Pedro Domingos

Learning multiple models and combining their results can often lead to significant accuracy improvements over the single "best" model. This area of research has recently received much attention (e.g., Chan, Stolfo, and Wolpert 1996). However, as Breiman (1996) notes, when the models being combined are "human-readable" (as is the case with, for example, decision trees and rule sets), the cost of this procedure is the loss of the comprehensibility afforded by a single model. Not only is the complexity of m models m times greater than that of one, but it is difficult and tedious for a human to predict the output of the model ensemble, and thus to understand its behavior. This can be a significant disadvantage, since comprehensibility is often of paramount importance in making the learner's output acceptable to users, allowing interactive refinement of the model produced, and gaining knowledge of the domain. This extended abstract describes and evaluates a method for retaining most of the accuracy improvements obtained by multiple-model approaches, while still producing a single comprehensible model.
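To make the general idea concrete, the sketch below shows one plausible instantiation, not necessarily the paper's exact procedure: a bagged ensemble of decision trees is learned first, additional examples are generated near the training data and labeled by the ensemble, and a single decision tree is then learned from the ensemble-labeled data so that it approximates the ensemble's behavior while remaining human-readable. The Gaussian perturbation used to generate extra examples, and all parameter settings, are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: learn multiple models (here, 25 bagged decision trees).
ensemble = BaggingClassifier(DecisionTreeClassifier(),
                             n_estimators=25, random_state=0)
ensemble.fit(X_train, y_train)

# Step 2 (assumed): generate extra examples by perturbing the training
# data, and label them with the ensemble, so the single model has more
# of the ensemble's behavior to imitate.
rng = np.random.default_rng(0)
X_extra = X_train + rng.normal(scale=0.05 * X_train.std(axis=0),
                               size=X_train.shape)

# Step 3: learn one comprehensible model on the ensemble-labeled data.
X_comb = np.vstack([X_train, X_extra])
y_comb = np.concatenate([ensemble.predict(X_train),
                         ensemble.predict(X_extra)])
single = DecisionTreeClassifier(random_state=0).fit(X_comb, y_comb)

print("ensemble accuracy:   ", accuracy_score(y_test, ensemble.predict(X_test)))
print("single-tree accuracy:", accuracy_score(y_test, single.predict(X_test)))
```

The resulting tree can be inspected or refined like any ordinary decision tree, while its predictions track those of the ensemble on and near the training data.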

