AAAI Publications, Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence

A Proposal for Behavior Prediction via Estimating Agents’ Evaluation Functions Using Prior Observations of Behavior
Robert Tyler Loftin, David L. Roberts

Last modified: 2015-04-01


In this work we present a theoretical approach (not currently implemented) to the problem of predicting agent behavior. The ultimate goal of this work is to learn models that can predict the future actions of intelligent agents based on previously recorded data on those agents’ behavior. We believe that we can improve the predictive accuracy of our models by assuming that an agent reasons about the actions it takes, and by explicitly modeling that reasoning process. Here, we model an agent’s reasoning process as a form of Monte-Carlo search, and attempt to learn a state evaluation function that, when used with this planning algorithm, yields a distribution of actions, given the current state of the world, similar to the one we observe in the data. While it is simple to simulate Monte-Carlo search given an evaluation function, it is much more difficult to determine an evaluation function that will generate a given behavior. Here we use Expectation-Maximization to find a maximum-likelihood estimate of the parameters of the evaluation function, treating the actual steps taken in planning each action as unobserved data.
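The forward direction the abstract describes (simulating Monte-Carlo search given an evaluation function to obtain an action distribution) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the one-dimensional domain, the linear features, and all function names (`features`, `evaluate`, `rollout`, `action_distribution`) are assumptions made for the example, and the softmax over rollout values is one plausible way to turn search results into a predicted action distribution.

```python
import math
import random

# Hypothetical toy domain: an agent on the integer line choosing to step
# left or right. All names and the domain itself are illustrative only.
ACTIONS = [-1, +1]

def features(state):
    """Feature vector phi(s) for a scalar state (bias + position)."""
    return [1.0, float(state)]

def evaluate(state, theta):
    """Linear state evaluation function V_theta(s) = theta . phi(s)."""
    return sum(t * f for t, f in zip(theta, features(state)))

def rollout(state, action, theta, depth=5, rng=random):
    """One Monte-Carlo rollout: take `action`, act randomly for the
    remaining depth, and score the final state with the evaluation
    function."""
    s = state + action
    for _ in range(depth - 1):
        s += rng.choice(ACTIONS)
    return evaluate(s, theta)

def action_distribution(state, theta, n_rollouts=200, temp=1.0, rng=random):
    """Softmax over mean rollout values: the distribution over actions an
    observer would see if the agent plans by Monte-Carlo search under
    parameters theta."""
    means = []
    for a in ACTIONS:
        vals = [rollout(state, a, theta, rng=rng) for _ in range(n_rollouts)]
        means.append(sum(vals) / n_rollouts)
    m = max(means)  # shift for numerical stability
    exps = [math.exp((v - m) / temp) for v in means]
    z = sum(exps)
    return {a: e / z for a, e in zip(ACTIONS, exps)}

rng = random.Random(0)
dist = action_distribution(0, theta=[0.0, 1.0], rng=rng)
```

With `theta = [0.0, 1.0]` rewarding larger positions, stepping right dominates the predicted distribution. The inverse problem the abstract poses (recovering `theta` from observed actions, with the rollouts treated as latent) is the part that calls for Expectation-Maximization and is not shown here.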


Keywords: Behavior Prediction; Behavior Recognition; Data Mining; Machine Learning; Inverse Reinforcement Learning; Planning; Monte Carlo Planning; Expectation Maximization; Multiagent Systems
