Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning

Authors

  • Kristopher De Asis, University of Alberta
  • Alan Chan, University of Alberta
  • Silviu Pitis, University of Toronto
  • Richard Sutton, University of Alberta
  • Daniel Graves, Huawei Technologies Canada, Ltd.

DOI:

https://doi.org/10.1609/aaai.v34i04.5784

Abstract

We explore fixed-horizon temporal difference (TD) methods, reinforcement learning algorithms for a new kind of value function that predicts the sum of rewards over a fixed number of future time steps. To learn the value function for horizon h, these algorithms bootstrap from the value function for horizon h−1, or some shorter horizon. Because no value function bootstraps from itself, fixed-horizon methods are immune to the stability problems that plague other off-policy TD methods using function approximation (also known as “the deadly triad”). Although fixed-horizon methods require the storage of additional value functions, this gives the agent additional predictive power, while the added complexity can be substantially reduced via parallel updates, shared weights, and n-step bootstrapping. We show how to use fixed-horizon value functions to solve reinforcement learning problems competitively with methods such as Q-learning that learn conventional value functions. We also prove convergence of fixed-horizon temporal difference methods with linear and general function approximation. Taken together, our results establish fixed-horizon TD methods as a viable new way of avoiding the stability problems of the deadly triad.
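The following is a minimal tabular sketch of the bootstrapping scheme described in the abstract: the horizon-h value estimate is updated toward the reward plus the horizon-(h−1) estimate at the next state, so no value function bootstraps from itself. The environment and policy interfaces (env.reset, env.step, policy) and all parameter names are illustrative assumptions, not taken from the paper.

    # Sketch of fixed-horizon TD(0) policy evaluation (tabular case).
    # V[h, s] estimates the expected sum of the next h rewards from state s;
    # V[0, :] is identically zero and is never updated.
    # env/policy interfaces are placeholders assumed for illustration.
    import numpy as np

    def fixed_horizon_td(env, policy, num_states, max_horizon,
                         episodes=1000, alpha=0.1):
        V = np.zeros((max_horizon + 1, num_states))
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                a = policy(s)
                s_next, r, done = env.step(a)
                for h in range(1, max_horizon + 1):
                    # The horizon-h target bootstraps from the
                    # horizon-(h-1) estimate at the next state.
                    target = r + (0.0 if done else V[h - 1, s_next])
                    V[h, s] += alpha * (target - V[h, s])
                s = s_next
        return V

As in the paper's description, the updates for all horizons share the same transition sample, so they can be performed in parallel at each step.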

Published

2020-04-03

How to Cite

De Asis, K., Chan, A., Pitis, S., Sutton, R., & Graves, D. (2020). Fixed-Horizon Temporal Difference Methods for Stable Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3741-3748. https://doi.org/10.1609/aaai.v34i04.5784

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning