AAAI Publications, Workshops at the Twenty-Fourth AAAI Conference on Artificial Intelligence
Teamwork and Coordination under Model Uncertainty in DEC-POMDPs
Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, Milind Tambe

Last modified: 2010-07-07


Distributed Partially Observable Markov Decision Processes (DEC-POMDPs) are a popular planning framework for computing (near-)optimal multiagent teamwork plans. However, these methods assume a complete and correct world model, an assumption often violated in real-world domains. We provide a new algorithm for DEC-POMDPs that is more robust to model uncertainty, with a focus on domains with sparse agent interactions. Our STC algorithm relies on three key ideas: (1) reducing planning-time computation by shifting some of the burden to execution-time reasoning, (2) exploiting sparse interactions between agents, and (3) maintaining an approximate model of agents’ beliefs. We empirically show that STC is often substantially faster than existing DEC-POMDP methods without sacrificing reward performance.
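The execution-time belief maintenance mentioned above can be illustrated with a standard Bayesian belief update over a discrete state space. This is a generic textbook POMDP update, not the paper's STC algorithm, and the toy transition/observation tables below are hypothetical:

```python
# Generic Bayesian belief update: b'(s') ∝ O(o | s', a) * Σ_s T(s' | s, a) * b(s)
# Shown only to illustrate the kind of belief reasoning an execution-time
# algorithm maintains; it is NOT the paper's STC method, and the toy model
# below uses made-up numbers.

def belief_update(belief, action, observation, T, O):
    """belief: {state: prob}; T[(s, a)]: {s': prob}; O[(s', a)]: {obs: prob}."""
    new_belief = {}
    successors = {sp for (s, a) in T if a == action for sp in T[(s, a)]}
    for s_next in successors:
        # Predict: probability of reaching s_next under the chosen action.
        pred = sum(belief[s] * T[(s, action)].get(s_next, 0.0) for s in belief)
        # Correct: weight by the likelihood of the received observation.
        new_belief[s_next] = O[(s_next, action)].get(observation, 0.0) * pred
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()} if total > 0 else new_belief

# Toy two-state example (hypothetical model):
T = {("s0", "go"): {"s0": 0.3, "s1": 0.7}, ("s1", "go"): {"s1": 1.0}}
O = {("s0", "go"): {"beep": 0.1}, ("s1", "go"): {"beep": 0.9}}
b = belief_update({"s0": 0.5, "s1": 0.5}, "go", "beep", T, O)
```

In a multiagent setting each agent would additionally maintain such approximate beliefs about its teammates' states, which is where exploiting sparse interactions keeps the bookkeeping tractable.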


DEC-POMDPs; Model Uncertainty; Execution-time Reasoning; Communication Reasoning