Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, 5
Volume/Issue: Science
Track: Learning
Abstract:
Heuristic inference rules whose measure of strength is less than certainty have an unusual property: better individual rules do not necessarily lead to a better overall rule set. Every less-than-certain rule contributes evidence toward erroneous conclusions on some problem instances, and the distribution of these erroneous conclusions over the instances is not necessarily related to individual rule quality. This has important consequences for the automatic machine learning of rules, since rule selection is usually based on measures of the quality of individual rules. In this paper, we explain why the most obvious and intuitively reasonable solution to this problem, incremental modification and deletion of the rules responsible for wrong conclusions à la Teiresias, is not always appropriate; in our experience, it usually fails to converge to an optimal set of rules. Given a set of heuristic rules, we argue that the best rule set is the element of the power set of rules that yields the global minimum of erroneous positive and negative conclusions. This selection process is modeled as a bipartite graph minimization problem and shown to be NP-complete. We describe a solution method, the Antidote Algorithm, which performs a model-directed search of the rule space. On an example from medical diagnosis, the Antidote Algorithm significantly reduced the number of misdiagnoses when applied to a rule set generated from 104 training instances.
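To make the combinatorial nature of the selection problem concrete, the following is a minimal Python sketch of choosing the minimum-error element of the power set of rules by exhaustive enumeration. Everything here is an illustrative assumption, not the paper's method: the majority-vote combining function, the toy rules and instances, and the helper names (`conclusions`, `total_error`, `best_rule_subset`) are all hypothetical stand-ins, and the sketch deliberately does not reproduce the Antidote Algorithm, only the brute-force baseline whose exponential cost motivates a model-directed search.

```python
from itertools import combinations

# Hypothetical setup: a rule is a (predicate, vote) pair that fires on some
# instances and votes +1 (positive conclusion) or -1 (negative conclusion).
# An instance is a (features, label) pair with a ground-truth boolean label.

def conclusions(rule_subset, features):
    """Combine evidence from firing rules by simple majority vote
    (a stand-in combining function, not the paper's scheme)."""
    votes = [vote for fires, vote in rule_subset if fires(features)]
    if not votes:
        return None  # no rule fired: no conclusion drawn
    return sum(votes) > 0  # True = positive conclusion

def total_error(rule_subset, instances):
    """Count erroneous positive and negative conclusions over all instances."""
    errors = 0
    for features, label in instances:
        concl = conclusions(rule_subset, features)
        if concl is not None and concl != label:
            errors += 1
    return errors

def best_rule_subset(rules, instances):
    """Exhaustively search the power set for the globally minimal-error
    subset. This is exponential in len(rules), which is why the
    (NP-complete) selection problem calls for a guided search in practice."""
    best, best_err = (), float("inf")
    for k in range(len(rules) + 1):
        for subset in combinations(rules, k):
            err = total_error(subset, instances)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

if __name__ == "__main__":
    # Toy data: label is True unless the feature is a multiple of 3.
    instances = [((i,), i % 3 != 0) for i in range(12)]
    rules = [
        (lambda x: x[0] % 2 == 0, +1),  # fires on evens, votes positive
        (lambda x: x[0] % 3 == 0, -1),  # fires on multiples of 3, votes negative
        (lambda x: x[0] > 5, +1),       # fires on large values, votes positive
    ]
    subset, err = best_rule_subset(rules, instances)
    print(f"best subset has {len(subset)} rule(s) with {err} error(s)")
```

Note how the sketch exhibits the abstract's central point: `total_error` scores whole subsets rather than individual rules, since a subset of individually strong rules can still distribute its erroneous conclusions badly across the instances.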