AAAI Publications, 2018 AAAI Spring Symposium Series

Inverse Reinforcement Learning via Nonparametric Subgoal Modeling
Adrian Šošić, Abdelhak M. Zoubir, Heinz Koeppl

Last modified: 2018-03-15


Recent advances in the field of inverse reinforcement learning (IRL) have yielded sophisticated frameworks which relax the original modeling assumption that the behavior of an observed agent reflects only a single intention. Instead, the demonstration data are partitioned to account for the fact that different trajectories may correspond to different intentions, e.g., because they were generated by different domain experts. In this work, we go one step further: using the intuitive concept of subgoals, we build upon the premise that even a single trajectory can be explained more efficiently locally, within a certain context, than globally, enabling a more compact representation of the observed behavior. Based on this assumption, we construct an implicit intentional model of the agent's goals to forecast its behavior in unobserved situations. The result is an integrated Bayesian prediction framework which provides spatially smooth policy estimates that are consistent with the expert's plan and significantly outperform existing IRL solutions. In addition, the framework can be naturally extended to handle scenarios with time-varying expert intentions.
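To make the nonparametric subgoal idea concrete, the sketch below clusters the states of a single toy 1-D trajectory into latent subgoals using a Chinese restaurant process (CRP) prior and a collapsed-style Gibbs sweep. This is an illustrative toy, not the authors' actual model: the Gaussian likelihood, all hyperparameter values, and the final greedy clean-up pass are assumptions chosen to keep the example small and self-contained.

```python
import math
import random

random.seed(0)

# Toy 1-D trajectory states; two spatial clusters hint at two subgoals.
# All numbers here are illustrative, not taken from the paper.
states = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9, 5.05]

alpha = 0.1   # CRP concentration: willingness to open a new subgoal
sigma = 0.5   # assumed noise of states around their subgoal location
tau   = 3.0   # assumed prior spread of subgoal locations
m0    = sum(states) / len(states)  # prior mean for a new subgoal

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

z = [0] * len(states)  # z[i] = latent subgoal label assigned to state i

def sweep(greedy=False):
    """One Gibbs sweep; greedy=True takes the argmax (MAP-style clean-up)."""
    for i, x in enumerate(states):
        z[i] = -1  # withdraw state i from its current subgoal
        labels = sorted({l for l in z if l >= 0})
        opts, w = [], []
        for k in labels:
            members = [states[j] for j in range(len(states)) if z[j] == k]
            mu = sum(members) / len(members)
            opts.append(k)
            w.append(len(members) * normal_pdf(x, mu, sigma))
        # CRP option: open a brand-new subgoal, scored by the prior predictive.
        opts.append(max(labels) + 1 if labels else 0)
        w.append(alpha * normal_pdf(x, m0, math.sqrt(sigma**2 + tau**2)))
        if greedy:
            z[i] = opts[max(range(len(w)), key=lambda j: w[j])]
        else:
            r = random.random() * sum(w)
            acc = 0.0
            for k, wk in zip(opts, w):
                acc += wk
                if r <= acc:
                    z[i] = k
                    break

for _ in range(50):
    sweep()
sweep(greedy=True)  # deterministic clean-up pass

print(z)  # one subgoal label for the states near 0, another for those near 5
```

Because the number of subgoals is governed by the CRP rather than fixed in advance, the sampler recovers two subgoals here without being told how many to expect; the same mechanism is what lets a nonparametric subgoal model grow its representation with the data.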


Keywords: inverse reinforcement learning; Bayesian nonparametric modeling; subgoal inference; graphical models; Gibbs sampling
