Symbolic Plans as High-Level Instructions for Reinforcement Learning

Authors

  • León Illanes, University of Toronto
  • Xi Yan, University of Toronto
  • Rodrigo Toro Icarte, University of Toronto
  • Sheila A. McIlraith, University of Toronto

DOI:

https://doi.org/10.1609/icaps.v30i1.6750

Abstract

Reinforcement learning (RL) agents seek to maximize the cumulative reward obtained when interacting with their environment. Users define tasks or goals for RL agents by designing specialized reward functions such that their maximization aligns with task satisfaction. This work explores the use of high-level symbolic action models as a framework for defining final-state goal tasks and for automatically producing their corresponding reward functions. We also show how automated planning can be used to synthesize high-level plans that guide hierarchical RL (HRL) techniques towards efficiently learning adequate policies. We provide a formal characterization of taskable RL environments and describe sufficient conditions that guarantee we can satisfy various notions of optimality (e.g., minimize total cost, maximize probability of reaching the goal). In addition, we present an empirical evaluation showing that our approach converges to near-optimal solutions faster than standard RL and HRL methods, and that it provides an effective framework for transferring learned skills across multiple tasks in a given environment.
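
To make the core idea concrete, the following is a minimal Python sketch (not the authors' implementation) of how a final-state goal and a high-level symbolic plan can induce a reward function and option-termination conditions for HRL. The Fluent/State encoding and the toy key-and-door plan are illustrative assumptions introduced here for exposition.

    # Minimal sketch: a high-level plan is a sequence of subgoals (sets of
    # fluents), and a final-state goal task rewards the agent only when all
    # goal fluents hold. All names here are illustrative assumptions.
    from typing import Callable, FrozenSet, List

    Fluent = str
    State = FrozenSet[Fluent]  # abstracted (symbolic) view of an env state

    def make_goal_reward(goal: State) -> Callable[[State], float]:
        """Reward function for a final-state goal task: 1 when every goal
        fluent holds in the current abstract state, 0 otherwise."""
        def reward(state: State) -> float:
            return 1.0 if goal <= state else 0.0
        return reward

    def plan_subgoals(plan: List[State]) -> List[Callable[[State], bool]]:
        """Each plan step induces a termination condition for an HRL option:
        the option for step i terminates once step i's subgoal holds."""
        return [lambda s, sg=sg: sg <= s for sg in plan]

    if __name__ == "__main__":
        # Toy plan: fetch a key, then open the door.
        plan = [frozenset({"have_key"}),
                frozenset({"have_key", "door_open"})]
        reward = make_goal_reward(plan[-1])
        done = plan_subgoals(plan)

        state: State = frozenset()
        print(reward(state))      # 0.0: final-state goal not yet satisfied
        state = frozenset({"have_key"})
        print(done[0](state))     # True: first option may terminate
        state = frozenset({"have_key", "door_open"})
        print(reward(state))      # 1.0: final-state goal satisfied

In this reading, the planner supplies the subgoal sequence and the RL agent only has to learn low-level policies (options) that achieve each step, which is one plausible way the plan can guide exploration.
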

Published

2020-06-01

How to Cite

Illanes, L., Yan, X., Toro Icarte, R., & McIlraith, S. A. (2020). Symbolic Plans as High-Level Instructions for Reinforcement Learning. Proceedings of the International Conference on Automated Planning and Scheduling, 30(1), 540-550. https://doi.org/10.1609/icaps.v30i1.6750