Trust Region Evolution Strategies

Authors

  • Guoqing Liu, University of Science and Technology of China
  • Li Zhao, Microsoft Research
  • Feidiao Yang, Chinese Academy of Sciences
  • Jiang Bian, Microsoft Research
  • Tao Qin, Microsoft Research Asia
  • Nenghai Yu, University of Science and Technology of China
  • Tie-Yan Liu, Microsoft

DOI:

https://doi.org/10.1609/aaai.v33i01.33014352

Abstract

Evolution Strategies (ES), a class of black-box optimization algorithms, has recently been demonstrated to be a viable alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. ES achieves fairly good performance on challenging reinforcement learning problems and is easier to scale in a distributed setting. However, standard ES algorithms perform one gradient update per data sample, which is not very efficient. In this paper, to use sampled data more efficiently, we propose a novel iterative procedure that optimizes a surrogate objective function, allowing data samples to be reused for multiple epochs of updates. We prove a monotonic improvement guarantee for this procedure. By making several approximations to the theoretically-justified procedure, we further develop a practical algorithm called Trust Region Evolution Strategies (TRES). Our experiments demonstrate the effectiveness of TRES on a range of popular MuJoCo locomotion tasks in the OpenAI Gym, achieving better performance than the ES algorithm.
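To make the contrast in the abstract concrete, the sketch below compares a vanilla ES update, which discards each sampled batch after a single gradient step, with a TRES-style update that reuses one batch for several epochs. The surrogate here is a clipped importance-weighted objective between the current search Gaussian and the sampling Gaussian; this is an illustrative stand-in for the paper's trust-region surrogate, not a reproduction of it, and all function names and hyperparameters are hypothetical.

```python
import numpy as np

def es_gradient(theta, fitness, sigma=0.1, n_pop=50, rng=None):
    """Vanilla ES: one score-function gradient estimate, then the batch is discarded."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n_pop, theta.size))        # Gaussian perturbations
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return eps.T @ rewards / (n_pop * sigma)              # score-function estimator

def tres_style_update(theta, fitness, sigma=0.1, n_pop=50,
                      epochs=10, lr=0.02, clip=0.2, rng=None):
    """Reuse one sampled batch for several surrogate-gradient epochs.

    Sketch only: a clipped importance-weighted surrogate between
    N(theta, sigma^2 I) and the sampling distribution N(theta_old, sigma^2 I)
    stands in for the paper's trust-region objective.
    """
    rng = rng or np.random.default_rng(0)
    theta_old = theta.copy()
    x = theta_old + sigma * rng.standard_normal((n_pop, theta.size))  # sample once
    r = np.array([fitness(xi) for xi in x])
    r = (r - r.mean()) / (r.std() + 1e-8)

    for _ in range(epochs):                               # multiple updates per batch
        # Importance weight between two isotropic Gaussians with equal sigma.
        logw = (((x - theta_old) ** 2).sum(1)
                - ((x - theta) ** 2).sum(1)) / (2 * sigma ** 2)
        w = np.exp(logw)
        # d w_i / d theta = w_i * (x_i - theta) / sigma^2.
        grad_w = w[:, None] * (x - theta) / sigma ** 2
        # Clipping zeroes the gradient once the ratio leaves [1-clip, 1+clip],
        # keeping the update near the sampling distribution (the trust region).
        active = ((w < 1 + clip) | (r < 0)) & ((w > 1 - clip) | (r > 0))
        grad = (active[:, None] * r[:, None] * grad_w).mean(0)
        theta = theta + lr * grad
    return theta

if __name__ == "__main__":
    toy = lambda th: -np.sum(th ** 2)                     # maximize -> drive theta to 0
    print(tres_style_update(np.full(5, 2.0), toy))
```

Each call to `tres_style_update` consumes one batch of fitness evaluations but performs `epochs` parameter updates, which is the sample-reuse behavior the abstract motivates; vanilla `es_gradient` would need a fresh batch for every update.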

Published

2019-07-17

How to Cite

Liu, G., Zhao, L., Yang, F., Bian, J., Qin, T., Yu, N., & Liu, T.-Y. (2019). Trust Region Evolution Strategies. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4352-4359. https://doi.org/10.1609/aaai.v33i01.33014352

Section

AAAI Technical Track: Machine Learning