Proceedings:
Training Issues in Incremental Learning
Abstract:
This paper studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only "charged" for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n-1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that the learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable; however, for some special cases we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.
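To make the protocol concrete, the sketch below simulates self-directed learning of monotone monomials over {0, 1}^n: the learner chooses the presentation order (non-decreasing Hamming weight), predicts "negative" until its first mistake, and that single mistake reveals the target conjunction exactly. This is a minimal illustration of the model under these assumptions, not the paper's exact construction; the function name self_directed_monomial_demo and the weight-ordering strategy are choices made here for the example.

```python
# Minimal sketch (assumed ordering strategy, not the paper's construction):
# self-directed learning of a monotone monomial over {0,1}^n.
from itertools import product

def self_directed_monomial_demo(n, relevant):
    """relevant: set of variable indices defining the target monotone monomial."""
    def label(x):                       # target concept: AND of the relevant bits
        return all(x[i] == 1 for i in relevant)

    learned = None                      # hypothesis: unknown until the first mistake
    mistakes = 0
    # Self-direction: the learner picks the order -- lightest instances first.
    for x in sorted(product((0, 1), repeat=n), key=sum):
        if learned is None:
            prediction = False          # predict "negative" while still uncommitted
        else:
            prediction = all(x[i] == 1 for i in learned)
        truth = label(x)
        if prediction != truth:
            mistakes += 1
            # The first positive instance has ones exactly on the relevant variables,
            # so the target is identified after this single mistake.
            learned = {i for i in range(n) if x[i] == 1}
    return mistakes

print(self_directed_monomial_demo(4, {0, 2}))   # -> 1
print(self_directed_monomial_demo(4, set()))    # -> 1 (empty monomial)
```

Under this ordering, every instance presented before the target's minimal positive example is genuinely negative, so at most one prediction is ever wrong regardless of which monotone monomial is the target.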