Proceedings: Proceedings of the International Symposium on Combinatorial Search
Issue: Vol. 12 No. 1 (2021): Fourteenth International Symposium on Combinatorial Search
Track: Extended Abstracts
Abstract:
Despite seminal advances in reinforcement learning in recent years, many domains where the rewards are sparse, e.g., given only at task completion, remain quite challenging. In such cases, it can be beneficial to tackle the task both from its beginning and its end, and make the two ends meet. Existing approaches that do so, however, are not effective in the common scenario where the strategy needed near the end goal is very different from the one that is effective earlier on. In this work we propose a novel RL approach for such settings. In short, we first train a backward-looking agent with a simple relaxed goal, and then augment the state representation of the forward-looking agent with straightforward hint features. This allows the learned forward agent to leverage information from backward plans without mimicking their policy. We demonstrate the efficacy of our approach on the challenging game of Sokoban, where we substantially surpass learned solvers that generalize across levels, and are competitive with the SOTA performance of the best highly crafted systems. Impressively, we achieve these results while learning from a small number of practice levels and using simple RL techniques.
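The mechanism sketched in the abstract can be made concrete with a small example. The Python snippet below is a minimal sketch on a hypothetical toy chain environment, not the authors' implementation: the relaxed backward pass, the binary hint definition, and all names (CHAIN_LEN, backward_plan, hint_features, augmented_state) are illustrative assumptions. It shows the key idea that states reached by a backward agent become extra hint features appended to the forward agent's observation, so the forward agent can exploit backward information without imitating the backward policy.

import numpy as np

# Toy 1-D chain: states 0..CHAIN_LEN-1, goal at the right end.
CHAIN_LEN = 10
GOAL = CHAIN_LEN - 1

def backward_plan(goal, steps):
    """Relaxed backward 'plan': walk a few steps from the goal toward the start
    and record the states visited. Stands in for a backward agent trained with
    a simplified goal."""
    state, visited = goal, [goal]
    for _ in range(steps):
        state = max(0, state - 1)
        visited.append(state)
    return set(visited)

def hint_features(state, plan_states):
    """Binary hint: does the current state lie on the backward plan?"""
    return np.array([1.0 if state in plan_states else 0.0])

def augmented_state(state, plan_states):
    """The forward agent observes the raw state (one-hot) plus the hint features,
    not the backward policy itself."""
    one_hot = np.eye(CHAIN_LEN)[state]
    return np.concatenate([one_hot, hint_features(state, plan_states)])

if __name__ == "__main__":
    plan = backward_plan(GOAL, steps=4)  # backward pass covers states near the goal
    for s in range(CHAIN_LEN):
        print(s, augmented_state(s, plan)[-1])  # hint lights up close to the goal

In this sketch the forward agent's policy network would consume augmented_state; the hint only marks promising states, leaving the forward agent free to learn a strategy that differs from the backward one.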
DOI: 10.1609/socs.v12i1.18580