Proceedings: Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003)
Abstract:
A boosting algorithm seeks to greedily minimize an empirical loss function. The resulting estimator takes the form of an additive function and is built iteratively by applying a base estimator (or learner) to samples updated according to the previous iterations. This paper studies the convergence of boosting when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove that boosting's greedy optimization converges to the infimum of the loss function over the linear span. As a byproduct, these results reveal the importance of restricting the step sizes of the greedy search, as is known in practice through the work of Friedman and others.
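For intuition, the following is a minimal Python sketch of the greedy procedure the abstract describes, specialized to squared loss with decision stumps (via scikit-learn) as the base learners; the paper's results cover general loss functions, and all names and parameter values here (boost, n_rounds, step_size) are illustrative assumptions, not taken from the paper. Each round fits the base learner to the negative gradient of the empirical loss and adds it to the additive estimator with a deliberately restricted step size.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_rounds=200, step_size=0.1):
    # Greedy boosting for squared loss over the linear span of depth-1 trees.
    # Illustrative sketch: names and defaults are assumptions, not the paper's.
    F = np.zeros(len(y))               # current additive estimator evaluated on the sample
    learners = []
    for _ in range(n_rounds):
        residuals = y - F              # negative gradient of the empirical squared loss
        h = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
        F += step_size * h.predict(X)  # restricted ("shrunken") step size, as in Friedman
        learners.append(h)
    return learners

def predict(learners, X, step_size=0.1):
    # Evaluate the additive estimator: sum over rounds of step_size * h_m(x).
    return step_size * sum(h.predict(X) for h in learners)

Keeping step_size well below 1 is the practical counterpart of the restricted greedy search steps whose importance the convergence analysis highlights.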