Probabilistic Plan Recognition in Multiagent Systems

Suchi Saria and Sridhar Mahadevan

We present a theoretical framework for online probabilistic plan recognition in cooperative multiagent systems. Our model extends the Abstract Hidden Markov Model (AHMM) (Bui, Venkatesh, and West 2002) and consists of a hierarchical dynamic Bayes network that allows reasoning about the interaction among multiple cooperating agents. We provide an in-depth analysis of two policy-termination schemes for concurrent action, τ_all and τ_any, introduced by Rohanimanesh and Mahadevan (2003). In the τ_all scheme, a joint policy terminates only when all agents have terminated executing their individual policies. In the τ_any scheme, a joint policy terminates as soon as any agent terminates executing its individual policy. Since exact inference is intractable, we describe an approximate inference algorithm based on Rao-Blackwellized particle filtering. This procedure reduces the complexity from exponential in N (the number of agents) and K (the number of levels) to linear in both N and K̂ ≤ K (the lowest level of plan coordination) for the τ_all termination scheme, and to O(N log N) and linear in K̂ for the τ_any termination scheme.
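The two termination schemes can be sketched as simple predicates over the agents' individual termination flags; this is an illustrative sketch, not code from the paper, and the function and variable names are ours.

```python
def tau_all(terminated: list[bool]) -> bool:
    """tau_all scheme: the joint policy terminates only when ALL
    agents have terminated executing their individual policies."""
    return all(terminated)

def tau_any(terminated: list[bool]) -> bool:
    """tau_any scheme: the joint policy terminates as soon as ANY
    agent terminates executing its individual policy."""
    return any(terminated)

# Example: three agents, of which only agent 1 has terminated.
flags = [False, True, False]
print(tau_all(flags))  # False: agents 0 and 2 are still executing
print(tau_any(flags))  # True: agent 1 has terminated
```

Under τ_any the joint policy can change as soon as one agent finishes, which is what allows the finer-grained O(N log N) bookkeeping mentioned above.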
