A Decision-Theoretic Approach to Evaluating Posterior Probabilities of Mental Models

Jonathan Y. Ito, David V. Pynadath, Stacy C. Marsella

In many multiagent domains, agents must maintain and update their beliefs over the possible mental models (goals, plans, activities, intentions, etc.) of other agents. Decision-theoretic agents typically represent this uncertainty as a probability distribution over their possible mental models of others, and they update their beliefs by computing a posterior probability over mental models conditioned on their observations. We present a novel algorithm for performing this belief update over mental models expressed as Partially Observable Markov Decision Problems (POMDPs). POMDPs are a common model for decision-theoretic agents, but there is no existing method for translating a POMDP, which generates deterministic behavior, into a probability distribution over actions suitable for abductive reasoning. In this work, we explore alternate methods for generating a more suitable probability distribution. We use a sample multiagent scenario to demonstrate the different behaviors of these approaches and to draw conclusions about the conditions under which each is successful.
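The abstract does not specify how a deterministic POMDP policy is softened into an action distribution. As a rough illustration only, the Python sketch below shows one common choice, a Boltzmann (softmax) distribution over the expected action values, plugged into the Bayesian update over candidate mental models that the abstract describes. The function names, the temperature parameter beta, and the use of a softmax are assumptions made here for illustration, not necessarily the method the paper evaluates.

    import numpy as np

    def action_distribution(q_values, beta=1.0):
        # Soften a deterministic policy into P(a | m): a Boltzmann
        # (softmax) over the expected values Q(b, a) of each action
        # under model m's current belief b. (Illustrative assumption.)
        q = np.asarray(q_values, dtype=float)
        exp_q = np.exp(beta * (q - q.max()))  # subtract max for numerical stability
        return exp_q / exp_q.sum()

    def update_model_posterior(prior, action_index, q_values_per_model, beta=1.0):
        # Bayesian update over candidate mental models given an
        # observed action: P(m | a) is proportional to P(a | m) P(m),
        # where P(a | m) comes from the softened policy of model m.
        likelihoods = np.array([
            action_distribution(q, beta)[action_index]
            for q in q_values_per_model
        ])
        posterior = likelihoods * np.asarray(prior, dtype=float)
        return posterior / posterior.sum()

    # Hypothetical usage: two candidate models with made-up Q-values;
    # observing action 0 shifts belief toward the model that prefers it.
    prior = [0.5, 0.5]
    q_values = [[10.0, 2.0], [3.0, 9.0]]
    posterior = update_model_posterior(prior, 0, q_values)

Note that as beta grows, the softmax approaches the deterministic argmax policy, so a single observed action can drive the posterior to near-certainty; smaller beta values hedge against modeling error. Trading off these regimes is the kind of condition-dependent behavior the abstract's evaluation concerns.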

Subjects: 7.1 Multi-Agent Systems; 3.4 Probabilistic Reasoning

Submitted: May 15, 2007

