Published:
2020-06-02
Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence, 34
Issue:
Vol. 34 No. 03: AAAI-20 Technical Tracks 3
Track:
AAAI Technical Track: Heuristic Search and Optimization
Abstract:
Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS_1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black-box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
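The selection mechanism the abstract refers to can be illustrated in code. Below is a minimal Python sketch, not the authors' implementation, of a generalised random gradient hyper-heuristic on OneMax: it assumes two low-level heuristics (RLS_1 and RLS_2), a strict-improvement acceptance rule, and illustrative names (random_gradient_hh, rls_k, one_max) that do not come from the paper.

import random

def one_max(x):
    # OneMax: number of 1-bits; maximised at the all-ones string.
    return sum(x)

def rls_k(x, k):
    # RLS_k: flip k distinct, uniformly chosen bit positions.
    y = x[:]
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def random_gradient_hh(n, ks=(1, 2), tau=50, max_evals=100_000):
    # Random gradient mechanism with learning period tau (sketch):
    # pick a low-level heuristic uniformly at random and run it for
    # up to tau steps; each improvement resets the period, and once a
    # full period passes without improvement a new heuristic is drawn.
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = one_max(x), 0
    while fx < n and evals < max_evals:
        k = random.choice(ks)            # random choice of RLS_k
        steps_left = tau                 # remaining learning period
        while steps_left > 0 and fx < n and evals < max_evals:
            y = rls_k(x, k)
            fy = one_max(y)
            evals += 1
            if fy > fx:                  # strict improvement: keep k
                x, fx = y, fy            # and restart the period
                steps_left = tau
            else:
                steps_left -= 1
    return x, fx, evals

For example, random_gradient_hh(100, ks=(1, 2), tau=50) runs the sketch on a 100-bit OneMax instance. Increasing tau lets the HH exploit a currently successful neighbourhood size for longer before re-randomising its choice, which is exactly the static-versus-adaptive trade-off the paper analyses.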
DOI:
10.1609/aaai.v34i03.5617
ISSN 2374-3468 (Online). ISSN 2159-5399 (Print). ISBN 978-1-57735-835-0 (10-issue set).
Published by AAAI Press, Palo Alto, California, USA. Copyright © 2020, Association for the Advancement of Artificial Intelligence. All Rights Reserved.