Deep Model-Based Reinforcement Learning via Estimated Uncertainty and Conservative Policy Optimization

Authors

  • Qi Zhou, University of Science and Technology of China
  • HouQiang Li, University of Science and Technology of China
  • Jie Wang, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i04.6177

Abstract

Model-based reinforcement learning algorithms tend to achieve higher sample efficiency than model-free methods. However, due to the inevitable errors of learned models, model-based methods struggle to reach the same asymptotic performance as model-free methods. In this paper, we propose a Policy Optimization method with Model-Based Uncertainty (POMBU), a novel model-based approach that effectively improves asymptotic performance by exploiting the uncertainty in Q-values. We derive an upper bound on this uncertainty, based on which the uncertainty can be approximated accurately and efficiently in the model-based setting. We further propose an uncertainty-aware policy optimization algorithm that optimizes the policy conservatively so as to encourage performance improvement with high probability. This significantly alleviates the overfitting of the policy to inaccurate models. Experiments show that POMBU outperforms existing state-of-the-art policy optimization algorithms in both sample efficiency and asymptotic performance. Moreover, the experiments demonstrate the strong robustness of POMBU compared with previous model-based approaches.
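
To make the idea concrete, below is a minimal Python sketch of uncertainty-penalized (conservative) value estimation over an ensemble of Q-value predictions, in the spirit of the abstract. The function name conservative_advantage, the coefficient kappa, and the ensemble shapes are illustrative assumptions; this does not reproduce the exact uncertainty bound or policy update used in POMBU.

    import numpy as np

    def conservative_advantage(q_ensemble, kappa=1.0):
        """Combine an ensemble of Q-value estimates into a conservative signal.

        q_ensemble: array of shape (n_models, n_samples) holding Q-values of the
        same state-action pairs predicted under different learned dynamics models.
        Returns a per-sample value that penalizes ensemble disagreement, so a
        policy update favors actions whose value is high with high probability
        rather than merely high on average.
        """
        mean_q = q_ensemble.mean(axis=0)   # expected return estimate
        std_q = q_ensemble.std(axis=0)     # proxy for epistemic (model) uncertainty
        return mean_q - kappa * std_q      # lower-confidence-bound objective

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy example: 5 learned models, 8 sampled state-action pairs.
        q_ensemble = rng.normal(loc=1.0, scale=0.3, size=(5, 8))
        print(conservative_advantage(q_ensemble, kappa=1.5))

Larger kappa makes the objective more pessimistic, trading some average return for a higher probability that a policy update actually improves performance.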

Published

2020-04-03

How to Cite

Zhou, Q., Li, H., & Wang, J. (2020). Deep Model-Based Reinforcement Learning via Estimated Uncertainty and Conservative Policy Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6941-6948. https://doi.org/10.1609/aaai.v34i04.6177

Section

AAAI Technical Track: Machine Learning