Deep Bayesian Nonparametric Learning of Rules and Plans from Demonstrations with a Learned Automaton Prior

  • Brandon Araki MIT
  • Kiran Vodrahalli Columbia University
  • Thomas Leech MIT
  • Cristian-Ioan Vasile MIT
  • Mark Donahue MIT Lincoln Laboratory
  • Daniela Rus MIT CSAIL

Abstract

We introduce a method for learning interpretable, manipulable imitative policies from expert demonstrations. We achieve interpretability by modeling the interactions between high-level actions as an automaton with connections to formal logic. We achieve manipulability by integrating this automaton into planning, so that changes to the automaton have predictable effects on the learned behavior. These qualities allow a human user to first understand what the model has learned and then either correct the learned behavior or zero-shot generalize to new, similar tasks. We build on previous work by removing the need for additional supervised information, which is hard to collect in practice. We achieve this with a deep Bayesian nonparametric hierarchical model. We evaluate our model in several domains and also present results from a real-world implementation on a mobile robotic arm platform.
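The core idea of the abstract, that task structure represented as an automaton can be edited with predictable effects on behavior, can be illustrated with a minimal toy sketch. All state and action names below are hypothetical and are not taken from the paper; this is not the paper's actual Bayesian nonparametric model, only an illustration of automaton manipulability.

```python
# Hypothetical task automaton: states are task phases, inputs are
# high-level action labels. (Illustrative only; not the paper's model.)
TRANSITIONS = {
    ("start", "pickup"): "holding",
    ("holding", "deliver"): "done",
}

def run(automaton, state, actions):
    """Advance the automaton through a sequence of high-level actions.
    An action with no defined transition leaves the state unchanged
    (an implicit self-loop)."""
    for a in actions:
        state = automaton.get((state, a), state)
    return state

# Learned behavior: pickup followed by deliver completes the task.
assert run(TRANSITIONS, "start", ["pickup", "deliver"]) == "done"

# "Manipulability": a user edits the automaton to require a safety
# check before delivery; the edit changes valid plans predictably.
edited = dict(TRANSITIONS)
edited[("holding", "deliver")] = "holding"   # deliver alone no longer finishes
edited[("holding", "check")] = "checked"
edited[("checked", "deliver")] = "done"
```

Because the automaton is an explicit, inspectable object, a user can read off what the policy requires and alter a single transition to change the induced behavior, which is the property the abstract calls manipulability.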

Published
2020-04-03
Section
AAAI Technical Track: Reasoning under Uncertainty