Model-Free IRL Using Maximum Likelihood Estimation

Authors

  • Vinamra Jain, University of Georgia
  • Prashant Doshi, University of Georgia
  • Bikramjit Banerjee, University of Southern Mississippi

DOI:

https://doi.org/10.1609/aaai.v33i01.33013951

Abstract

The problem of learning an expert’s unknown reward function from a limited number of demonstrations of the expert’s behavior is studied in inverse reinforcement learning (IRL). To gain traction in this challenging and underconstrained problem, IRL methods predominantly represent the expert’s reward function as a linear combination of known features. Most existing IRL algorithms either assume that a transition function is available or learn it in a complex and inefficient way. In this paper, we present a model-free approach to IRL that casts IRL in the maximum likelihood framework. We present modifications of model-free Q-learning that replace its maximization step so that the gradient of the Q-function can be computed. We use gradient ascent on the feature weights to maximize the likelihood of the expert’s trajectories. We demonstrate on two problem domains that our approach improves the likelihood compared to previous methods.
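The sketch below is a minimal, hypothetical illustration of the idea described in the abstract, not the authors’ implementation: reward is linear in known features, a model-free Q-learner with a softened (Boltzmann) backup replaces the hard maximization so the demonstration likelihood varies smoothly with the feature weights, and those weights are updated by gradient ascent on the log-likelihood of the expert’s trajectories. The toy environment, feature matrix, demonstrations, and the finite-difference gradient are all assumptions made for brevity; the paper derives the gradient of the Q-function analytically.

```python
# Illustrative sketch only: MLE-based, model-free IRL with a soft Q-learning backup.
# Environment, features, demonstrations, and the finite-difference gradient are
# assumptions for this example, not the method as published.
import numpy as np

N_STATES, N_ACTIONS, N_FEATURES = 5, 2, 3
PHI = np.random.default_rng(0).normal(size=(N_STATES, N_ACTIONS, N_FEATURES))  # known features
# Hidden dynamics: the learner only samples transitions (model-free), never reads P.
P = np.random.default_rng(1).dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))

def soft_q_learning(w, episodes=200, horizon=30, alpha=0.1, gamma=0.9, beta=5.0):
    """Model-free Q-learning in which the hard max is replaced by a softmax backup,
    so Q (and hence the trajectory likelihood) is a smooth function of the weights w."""
    rng = np.random.default_rng(2)          # fixed seed: deterministic in w for the gradient
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(horizon):
            a = rng.integers(N_ACTIONS)      # exploratory behavior policy
            r = PHI[s, a] @ w                # reward linear in the known features
            s2 = rng.choice(N_STATES, p=P[s, a])
            soft_max = np.log(np.sum(np.exp(beta * Q[s2]))) / beta
            Q[s, a] += alpha * (r + gamma * soft_max - Q[s, a])
            s = s2
    return Q

def log_likelihood(w, trajectories, beta=5.0):
    """Log-likelihood of expert (state, action) trajectories under the Boltzmann
    policy induced by the soft Q-function for weights w."""
    Q = soft_q_learning(w)
    ll = 0.0
    for traj in trajectories:
        for s, a in traj:
            logits = beta * Q[s]
            ll += logits[a] - np.log(np.sum(np.exp(logits)))
    return ll

# Hypothetical expert demonstrations: lists of (state, action) pairs.
demos = [[(0, 1), (2, 0), (3, 1)], [(1, 0), (4, 1), (2, 0)]]

# Gradient ascent on the demonstration likelihood; the gradient is estimated by
# central finite differences here purely for brevity.
w = np.zeros(N_FEATURES)
for it in range(20):
    grad = np.zeros_like(w)
    for k in range(N_FEATURES):
        e = np.zeros(N_FEATURES); e[k] = 1e-2
        grad[k] = (log_likelihood(w + e, demos) - log_likelihood(w - e, demos)) / 2e-2
    w += 0.5 * grad
    print(f"iter {it}: log-likelihood = {log_likelihood(w, demos):.3f}")
```

Under these assumptions, the log-likelihood printed at each iteration should increase as the weights move toward values whose induced soft-optimal policy better explains the demonstrated actions.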

Published

2019-07-17

How to Cite

Jain, V., Doshi, P., & Banerjee, B. (2019). Model-Free IRL Using Maximum Likelihood Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3951-3958. https://doi.org/10.1609/aaai.v33i01.33013951

Section

AAAI Technical Track: Machine Learning