Adaptive Trust Region Policy Optimization: Global Convergence and Faster Rates for Regularized MDPs

Authors

  • Lior Shani, Technion
  • Yonathan Efroni, Technion
  • Shie Mannor, Technion

DOI:

https://doi.org/10.1609/aaai.v34i04.6021

Abstract

Trust region policy optimization (TRPO) is a popular and empirically successful policy search algorithm in Reinforcement Learning (RL) in which a surrogate problem, which restricts consecutive policies to be ‘close’ to one another, is iteratively solved. Nevertheless, TRPO has been considered a heuristic algorithm inspired by Conservative Policy Iteration (CPI). We show that the adaptive scaling mechanism used in TRPO is in fact the natural “RL version” of traditional trust-region methods from convex analysis. We first analyze TRPO in the planning setting, in which we have access to the model and the entire state space. Then, we consider sample-based TRPO and establish an Õ(1/√N) convergence rate to the global optimum. Importantly, the adaptive scaling mechanism allows us to analyze TRPO in regularized MDPs, for which we prove fast rates of Õ(1/N), much like results in convex optimization. This is the first result in RL showing that regularizing the instantaneous cost or reward leads to faster rates.
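
As a rough sketch of the trust-region view described in the abstract (the notation here is assumed for illustration rather than quoted from the paper: step sizes t_k, the state-action value q^{π_k} of the current policy π_k, and a Bregman divergence B_ω such as the KL divergence), the adaptive update can be written as a per-state proximal step:

$$\pi_{k+1}(\cdot \mid s) \;\in\; \arg\max_{\pi(\cdot \mid s)\,\in\,\Delta_A} \Big\{ t_k \,\big\langle q^{\pi_k}(s,\cdot),\, \pi(\cdot \mid s) \big\rangle \;-\; B_\omega\big(\pi(\cdot \mid s),\, \pi_k(\cdot \mid s)\big) \Big\}, \qquad \forall s.$$

In this sketch the adaptive scaling enters through the step sizes t_k, which trade off the linearized improvement term against the divergence from the previous policy; with B_ω taken as the KL divergence the step is mirror-descent-like, while the squared Euclidean distance yields a projected-gradient-like update.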

Published

2020-04-03

How to Cite

Shani, L., Efroni, Y., & Mannor, S. (2020). Adaptive Trust Region Policy Optimization: Global Convergence and Faster Rates for Regularized MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5668-5675. https://doi.org/10.1609/aaai.v34i04.6021

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning