Deriving Subgoals Autonomously to Accelerate Learning in Sparse Reward Domains

Authors

  • Michael Dann, RMIT University
  • Fabio Zambetta, RMIT University
  • John Thangarajah, RMIT University

DOI:

https://doi.org/10.1609/aaai.v33i01.3301881

Abstract

Sparse reward games, such as the infamous Montezuma’s Revenge, pose a significant challenge for Reinforcement Learning (RL) agents. Hierarchical RL, which promotes efficient exploration via subgoals, has shown promise in these games. However, existing agents rely either on human domain knowledge or on slow autonomous methods to derive suitable subgoals. In this work, we describe a new, autonomous approach for deriving subgoals from raw pixels that is more efficient than competing methods. We propose a novel intrinsic reward scheme for exploiting the derived subgoals, applying it to three Atari games with sparse rewards. Our agent’s performance is comparable to that of state-of-the-art methods, demonstrating the usefulness of the subgoals found.
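Note: the abstract describes the intrinsic reward scheme only at a high level. As a rough, hypothetical sketch of the general idea (not the authors' exact formulation), the Python snippet below pays a one-off per-episode bonus whenever the agent's observation embedding comes within a threshold distance of a derived subgoal, on top of the sparse extrinsic reward. The class name SubgoalBonus and all parameter choices are illustrative assumptions.

    import numpy as np

    class SubgoalBonus:
        """Hypothetical shaper: one-off intrinsic bonus per subgoal per episode."""

        def __init__(self, subgoals, threshold=1.0, bonus=0.1):
            # Subgoals assumed given as feature vectors, e.g. embeddings of
            # pixel observations produced by the subgoal-derivation step.
            self.subgoals = [np.asarray(g, dtype=np.float64) for g in subgoals]
            self.threshold = threshold  # max distance to count a subgoal as reached
            self.bonus = bonus          # intrinsic reward per newly reached subgoal
            self.reached = set()        # indices of subgoals already rewarded

        def reset(self):
            # Call at the start of each episode so bonuses can be earned again.
            self.reached.clear()

        def shape(self, obs_embedding, extrinsic_reward):
            # Return the extrinsic reward plus a bonus for each subgoal that is
            # reached for the first time this episode.
            intrinsic = 0.0
            x = np.asarray(obs_embedding, dtype=np.float64)
            for i, g in enumerate(self.subgoals):
                if i not in self.reached and np.linalg.norm(x - g) < self.threshold:
                    self.reached.add(i)
                    intrinsic += self.bonus
            return extrinsic_reward + intrinsic

In use, shape() would wrap the environment reward at every step of the RL loop, and reset() would be called at each episode start so the bonus stays a one-off per episode rather than a repeatable source of reward.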

Published

2019-07-17

How to Cite

Dann, M., Zambetta, F., & Thangarajah, J. (2019). Deriving Subgoals Autonomously to Accelerate Learning in Sparse Reward Domains. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 881-889. https://doi.org/10.1609/aaai.v33i01.3301881

Section

AAAI Technical Track: Applications