Proceedings:
Proceedings of the International Symposium on Combinatorial Search
Volume/Issue:
Vol. 11 No. 1 (2018): Eleventh Annual Symposium on Combinatorial Search
Track:
Full Papers
Abstract:
Algorithm selection approaches have achieved impressive performance improvements in many areas of AI. Most of the literature considers the offline algorithm selection problem, where the initial selection model is never updated after training. However, new data from running algorithms on instances becomes available when algorithms are selected and run. We investigate how this online data can be used to improve the selection model over time. This is especially relevant when insufficient training instances were used, but potentially improves the performance of algorithm selection in all cases. We formally define the online algorithm selection problem and model it as a contextual multi-armed bandit problem, propose a methodology for solving it, and empirically demonstrate performance improvements. We also show that our online algorithm selection method can be used when no training data whatsoever is available, a setting where offline algorithm selection cannot be used. Our experiments indicate that a simple greedy approach achieves the best performance.
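The abstract frames online algorithm selection as a contextual multi-armed bandit solved with a simple greedy policy. The sketch below is a minimal illustration of that general idea, not the paper's actual method: it assumes runtime is the performance measure, keeps one online ridge-regression model per algorithm (arm), greedily runs the algorithm with the lowest predicted runtime, and updates only the chosen arm with the observed runtime (bandit feedback). The class name, feature setup, and toy simulation are all hypothetical.

```python
import numpy as np


class GreedyOnlineSelector:
    """Greedy contextual-bandit algorithm selector (illustrative sketch).

    One independent ridge-regression runtime model per algorithm, maintained
    in closed form via A = X^T X + ridge*I and b = X^T y, updated online.
    """

    def __init__(self, n_algorithms, n_features, ridge=1.0):
        self.A = [ridge * np.eye(n_features) for _ in range(n_algorithms)]
        self.b = [np.zeros(n_features) for _ in range(n_algorithms)]

    def select(self, features):
        # Greedy choice: the arm with the smallest predicted runtime
        # for this instance's feature vector (the "context").
        preds = [features @ np.linalg.solve(A, b) for A, b in zip(self.A, self.b)]
        return int(np.argmin(preds))

    def update(self, arm, features, runtime):
        # Only the chosen algorithm's runtime is observed, so only its
        # model is updated; the other arms' outcomes remain unknown.
        self.A[arm] += np.outer(features, features)
        self.b[arm] += runtime * features


# Toy usage: no offline training data is needed; the selector starts cold
# and improves as instances are solved. Runtimes are simulated here.
rng = np.random.default_rng(0)
true_w = rng.uniform(0.5, 2.0, size=(3, 5))   # hidden per-algorithm runtime weights
selector = GreedyOnlineSelector(n_algorithms=3, n_features=5)
for _ in range(200):
    x = rng.uniform(0.0, 1.0, size=5)                 # instance features
    arm = selector.select(x)                          # pick an algorithm
    observed = true_w[arm] @ x + rng.normal(0.0, 0.1) # observed runtime of chosen arm
    selector.update(arm, x, observed)
```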
DOI:
10.1609/socs.v9i1.18458