Proceedings: Book One
Volume/Issue: Proceedings of the AAAI Conference on Artificial Intelligence, 19
Track: Learning
Abstract:
This paper presents a new boosting (arcing) algorithm called POCA, Parallel Online Continuous Arcing. Unlike traditional boosting algorithms (such as Arc-x4 and AdaBoost), which construct ensembles by adding and training weak learners sequentially on a round-by-round basis, training in POCA is performed over the entire ensemble continuously and in parallel. Since members of the ensemble are not frozen after an initial learning period (as in traditional boosting), POCA is able to adapt rapidly to nonstationary environments, and because POCA does not require the explicit scoring of a fixed exemplar set, it can perform online learning of non-repeating data. We present results from experiments conducted using neural network experts showing that POCA is typically faster and more adaptive than existing boosting algorithms. Results presented for the UCI letter dataset are, to our knowledge, the best published scores to date.
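The abstract's key departure from round-by-round boosting is that every ensemble member is updated on every incoming example, with no member ever frozen and no fixed exemplar set retained. The sketch below is only a loose illustration of that idea, not the paper's actual POCA update: the logistic-regression experts, the online application of an Arc-x4-style weighting rule, and all function names are assumptions introduced here for clarity.

```python
# Hypothetical sketch of continuous, parallel online arcing over a data stream.
# The expert model and weighting details are illustrative, not the paper's POCA.
import numpy as np


class OnlineExpert:
    """A tiny logistic-regression 'weak learner' updated by online SGD."""

    def __init__(self, n_features, lr=0.1, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.w = rng.normal(scale=0.01, size=n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x, y, weight=1.0):
        # Weighted gradient step on log loss; y is 0 or 1.
        err = self.predict_proba(x) - y
        self.w -= self.lr * weight * err * x
        self.b -= self.lr * weight * err


def continuous_parallel_arcing(stream, n_features, n_experts=5):
    """Update *all* experts on each incoming example (no round-by-round training).

    Expert k's example weight grows with the mistakes of experts 0..k-1 on that
    same example (an Arc-x4-like emphasis, 1 + m^4), so later experts concentrate
    on what earlier experts currently get wrong. No expert is ever frozen and no
    exemplar set is stored, so the ensemble keeps adapting as the stream drifts.
    """
    experts = [OnlineExpert(n_features, rng=np.random.default_rng(k))
               for k in range(n_experts)]
    for x, y in stream:  # stream yields (np.ndarray of shape (n_features,), 0/1 label)
        mistakes_so_far = 0
        for expert in experts:
            weight = 1.0 + mistakes_so_far ** 4
            pred = expert.predict_proba(x) >= 0.5
            mistakes_so_far += int(pred != y)
            expert.update(x, y, weight=weight)
    return experts


def ensemble_predict(experts, x):
    """Majority vote over the experts' current predictions."""
    votes = sum(int(e.predict_proba(x) >= 0.5) for e in experts)
    return int(votes * 2 > len(experts))
```

The inner loop is written sequentially for readability, but each example's update to the whole ensemble is independent of any stored history, which is what allows the members to be trained concurrently and to track nonstationary data.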