Learning from Interventions Using Hierarchical Policies for Safe Learning

Authors

  • Jing Bi, University of Rochester
  • Vikas Dhiman, University of California San Diego
  • Tianyou Xiao, University of Rochester
  • Chenliang Xu, University of Rochester

DOI:

https://doi.org/10.1609/aaai.v34i06.6602

Abstract

Learning from Demonstrations (LfD) via Behavior Cloning (BC) works well on multiple complex tasks. However, a limitation of the typical LfD approach is that it requires expert demonstrations for all scenarios, including those in which the algorithm is already well trained. The recently proposed Learning from Interventions (LfI) overcomes this limitation by using an expert overseer who intervenes only when it suspects that an unsafe action is about to be taken. Although LfI significantly improves over LfD, state-of-the-art LfI fails to account for the delay caused by the expert's reaction time and learns only short-term behavior. We address these limitations by 1) interpolating the expert's interventions back in time, and 2) splitting the policy into two hierarchical levels, one that generates sub-goals for the future and another that generates actions to reach those desired sub-goals. This sub-goal prediction forces the algorithm to learn long-term behavior while remaining robust to the expert's reaction time. Our experiments show that LfI with sub-goals in a hierarchical policy framework trains faster and achieves better asymptotic performance than typical LfD.
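As a rough illustration of the two-level hierarchical policy described above, the sketch below (not the authors' implementation) pairs a high-level network that predicts a sub-goal with a low-level network that outputs an action conditioned on the current state and that sub-goal. All class names, dimensions, and the reaction-delay parameter k are illustrative assumptions, and the backdating helper is only one plausible reading of "interpolating the expert's interventions back in time."

```python
import torch
import torch.nn as nn


class HighLevelPolicy(nn.Module):
    """Predicts a sub-goal (a desired future state) from the current state."""

    def __init__(self, state_dim: int, goal_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, goal_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class LowLevelPolicy(nn.Module):
    """Produces an action intended to drive the agent toward a given sub-goal."""

    def __init__(self, state_dim: int, goal_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1))


def backdated_indices(t_intervene: int, k: int) -> range:
    """Indices of states relabelled when the expert intervenes at step t_intervene,
    shifted back k steps to compensate for the expert's reaction delay
    (an assumed relabelling rule, not the paper's exact formulation)."""
    return range(max(0, t_intervene - k), t_intervene + 1)


if __name__ == "__main__":
    # Tiny usage example with random data; all dimensions are arbitrary.
    high = HighLevelPolicy(state_dim=8, goal_dim=8)
    low = LowLevelPolicy(state_dim=8, goal_dim=8, action_dim=2)
    state = torch.randn(1, 8)
    goal = high(state)          # sub-goal predicted for a few steps ahead
    action = low(state, goal)   # action steering the agent toward that sub-goal
    print(action.shape)         # torch.Size([1, 2])
```

Conditioning the low-level policy on a predicted future state is what pushes the learner toward longer-horizon behavior, while shifting intervention labels back a few steps is one way such a scheme can absorb the overseer's reaction delay.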

Published

2020-04-03

How to Cite

Bi, J., Dhiman, V., Xiao, T., & Xu, C. (2020). Learning from Interventions Using Hierarchical Policies for Safe Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10352-10360. https://doi.org/10.1609/aaai.v34i06.6602

Issue

Vol. 34 No. 06 (2020)

Section

AAAI Technical Track: Robotics