Weighing Hypotheses: Incremental Learning from Noisy Data

Philip Laird

Incremental learning from noisy data presents two challenges: evaluating multiple hypotheses incrementally, and distinguishing errors due to noise from errors due to faulty hypotheses. These problems are critical in areas of machine learning such as concept learning, inductive programming, and sequence prediction. I develop a general, quantitative method for weighing the merits of different hypotheses in light of their performance on possibly noisy data. The method is incremental, independent of the hypothesis space, and grounded in Bayesian probability.
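The abstract's idea of incrementally weighing hypotheses against noisy data can be illustrated with a standard Bayesian update. The sketch below is not the paper's algorithm; it assumes a fixed, known noise rate and binary labels, and the function and variable names are illustrative. Each hypothesis's weight is multiplied by the likelihood of the observation under that hypothesis, then the weights are renormalized, so the update is incremental and independent of how hypotheses are represented.

```python
# Illustrative sketch (not the paper's method): incremental Bayesian
# weighting of hypotheses when observations are noisy.
# Assumption: each observed label is correct with probability 1 - noise
# and flipped with probability noise.

def update_weights(weights, predictions, observation, noise=0.1):
    """One incremental Bayesian update of hypothesis weights.

    weights: prior probabilities over hypotheses (sums to 1)
    predictions: each hypothesis's predicted label for this example
    observation: the (possibly noisy) observed label
    """
    posterior = []
    for w, pred in zip(weights, predictions):
        # Likelihood of the observation given this hypothesis:
        # high if the hypothesis agrees, low (but nonzero) if it
        # disagrees -- noise keeps one error from eliminating it.
        likelihood = (1 - noise) if pred == observation else noise
        posterior.append(w * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypotheses, uniform prior; hypotheses 0 and 1 predict 1,
# hypothesis 2 predicts 0; we observe the label 1.
weights = [1 / 3, 1 / 3, 1 / 3]
weights = update_weights(weights, [1, 1, 0], 1, noise=0.1)
```

Because the disagreeing hypothesis retains a small likelihood rather than zero, a single noisy observation lowers its weight without ruling it out, which is the behavior an incremental learner needs on noisy data.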
