Abstract:
The quality of a learning algorithm is characterized by the accuracy, stability, and comprehensibility of the models it generates. Although ensembles produce accurate and stable classifiers, they are hard to interpret. In this paper, we propose a meta-learning method for ID3 that uses an ensemble to gain accuracy and stability while still producing a single, comprehensible classifier. The main idea is to generate additional examples at every stage of decision tree construction and use them to select the best attribute test. These new examples are classified using the ensemble constructed from the original training set. The number of new examples generated depends on the size of the input attribute space, and the attribute values of the new examples are partially determined by the algorithm. Existing work in this area generates a fixed number of random examples. Experimental analysis shows that our approach is better than existing work at retaining the accuracy and stability gains provided by the ensemble classifier.
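To make the described node-level step concrete, the following is a minimal sketch of one plausible reading of the abstract, not the authors' algorithm: at each node, synthetic examples consistent with the attribute tests on the path so far are generated, labelled by majority vote of an ensemble trained on the original data, and pooled with the training examples to pick the attribute with the highest information gain. All function names are hypothetical, and the rule tying the number of synthetic examples to the remaining attribute space is an assumed stand-in for the paper's actual scheme.

```python
import math
import random
from collections import Counter


def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())


def information_gain(examples, labels, attr):
    """Gain from splitting `examples` (dicts of attribute -> value) on `attr`."""
    by_value = {}
    for ex, lab in zip(examples, labels):
        by_value.setdefault(ex[attr], []).append(lab)
    remainder = sum(len(labs) / len(labels) * entropy(labs)
                    for labs in by_value.values())
    return entropy(labels) - remainder


def ensemble_vote(ensemble, example):
    """Majority vote of the ensemble members (each a callable example -> label)."""
    return Counter(clf(example) for clf in ensemble).most_common(1)[0][0]


def synthesize_examples(path_constraints, domains, n_new, rng=random):
    """Create `n_new` examples: attributes already fixed by the tree path keep
    their values; the remaining attribute values are drawn from their domains."""
    new_examples = []
    for _ in range(n_new):
        ex = dict(path_constraints)
        for attr, values in domains.items():
            if attr not in ex:
                ex[attr] = rng.choice(values)
        new_examples.append(ex)
    return new_examples


def best_attribute(train, train_labels, candidate_attrs, domains,
                   path_constraints, ensemble):
    """Choose the attribute test at the current node using the original plus
    ensemble-labelled synthetic examples. The count of synthetic examples here
    is an assumed heuristic tied to the size of the remaining attribute space."""
    n_new = sum(len(domains[a]) for a in candidate_attrs)
    synthetic = synthesize_examples(path_constraints, domains, n_new)
    synthetic_labels = [ensemble_vote(ensemble, ex) for ex in synthetic]
    examples = list(train) + synthetic
    labels = list(train_labels) + synthetic_labels
    return max(candidate_attrs,
               key=lambda a: information_gain(examples, labels, a))
```

Under this reading, the rest of the tree is grown exactly as in ID3; only the attribute-selection step changes, which is what lets the ensemble's accuracy and stability influence the single resulting tree.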