Abstract:
We present a new method for automatically creating useful temporally-extended actions in reinforcement learning. Our method identifies states that lie between two densely-connected regions of the state space and generates temporally-extended actions (e.g., options) that take the agent efficiently to these states. We search for these states using graph partitioning methods on local views of the transition graph. This local perspective is a key property of our algorithm, one that differentiates it from most earlier work in this area and allows it to scale to problems with large state spaces.
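To make the idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the general approach: build a local transition graph from recent experience, spectrally partition it into two densely-connected regions, and return the states on the boundary of the cut as candidate subgoals. All names (e.g., `find_cut_states`) and the choice of spectral partitioning are assumptions for illustration only.

```python
import numpy as np


def find_cut_states(transitions):
    """Return states on the boundary between the two halves of a
    spectral 2-way partition of a local transition graph.

    transitions: iterable of (state, next_state) pairs from recent experience.
    """
    # Index the states seen in the local view.
    states = sorted({s for t in transitions for s in t})
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)

    # Symmetric adjacency matrix of the local transition graph.
    A = np.zeros((n, n))
    for s, s2 in transitions:
        if s != s2:
            A[idx[s], idx[s2]] += 1.0
            A[idx[s2], idx[s]] += 1.0

    # Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    d[d == 0] = 1.0
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

    # The Fiedler vector (eigenvector of the second-smallest eigenvalue)
    # gives an approximate minimum normalized cut of the graph.
    _, eigvecs = np.linalg.eigh(L)
    side = eigvecs[:, 1] >= 0

    # Candidate subgoals: states with at least one edge crossing the cut.
    return [
        states[i]
        for i in range(n)
        if any(A[i, j] > 0 and side[i] != side[j] for j in range(n))
    ]


if __name__ == "__main__":
    # Toy example: two small "rooms" joined through a single doorway state 3.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3),   # room A -> doorway
             (3, 4), (4, 5), (4, 6), (5, 6)]   # doorway -> room B
    print(find_cut_states(edges))  # the doorway state appears among the candidates
```

In a full system, such candidate states would then serve as subgoals for constructing options; this sketch only shows the partitioning step on a small local graph.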