Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance

Authors

  • Mingxuan Jing, Tsinghua University
  • Xiaojian Ma, UCLA
  • Wenbing Huang, Tsinghua University
  • Fuchun Sun, Tsinghua University
  • Chao Yang, Tsinghua University
  • Bin Fang, Tsinghua University
  • Huaping Liu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i04.5953

Abstract

In this paper, we study Reinforcement Learning from Demonstrations (RLfD), which improves the exploration efficiency of Reinforcement Learning (RL) by providing expert demonstrations. Most existing RLfD methods require the demonstrations to be perfect and sufficient, which is rarely the case in practice. To cope with imperfect demonstrations, we first formally define an imperfect-expert setting for RLfD, and then show that previous methods suffer from two issues, concerning optimality and convergence respectively. Building on these theoretical findings, we tackle both issues by treating the expert guidance as a soft constraint that regulates the agent's policy exploration, which leads to a constrained optimization problem. We further show that this problem can be solved efficiently by performing a local linear search on its dual form. Extensive empirical evaluations on a comprehensive collection of benchmarks indicate that our method consistently improves over other RLfD counterparts.
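
A minimal LaTeX sketch of the constrained formulation the abstract describes follows; the symbols J (RL return), D (a divergence between the agent's policy and the expert policy), epsilon (constraint slack), and lambda (dual multiplier) are illustrative assumptions, not necessarily the paper's exact notation:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Constrained RLfD objective suggested by the abstract: maximize the
% RL return while keeping the agent's policy within a divergence
% ball of radius \epsilon around the (possibly imperfect) expert.
\begin{equation}
  \max_{\theta} \; J(\pi_\theta)
  \quad \text{s.t.} \quad D(\pi_\theta, \pi_E) \le \epsilon
\end{equation}
% The Lagrangian dual replaces the hard constraint with a soft
% penalty; per the abstract, the resulting problem is addressed by a
% local linear search on this dual form.
\begin{equation}
  \min_{\lambda \ge 0} \, \max_{\theta} \;
  J(\pi_\theta) - \lambda \bigl( D(\pi_\theta, \pi_E) - \epsilon \bigr)
\end{equation}
\end{document}

Relaxing the constraint through its dual is what makes the expert guidance "soft": the multiplier lambda trades off reward maximization against staying close to the demonstrations, rather than enforcing exact imitation of a possibly imperfect expert.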

Published

2020-04-03

How to Cite

Jing, M., Ma, X., Huang, W., Sun, F., Yang, C., Fang, B., & Liu, H. (2020). Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5109-5116. https://doi.org/10.1609/aaai.v34i04.5953

Section

AAAI Technical Track: Machine Learning