Proceedings: Distributed Plan and Schedule Management
Issue: Papers from the 2006 AAAI Spring Symposium
Abstract:
In recent work on the DARPA COORDINATORs program, we have developed multi-agent Markov decision process (MDP) techniques for distributed plan management. COORDINATORs problems arrive in distributed form: each agent receives a local view of its portion of the problem and of how that portion relates to the problems of other agents. Even so, the MDPs that capture the individual agents' local planning and scheduling problems can be too large to enumerate and solve. Furthermore, COORDINATORs agents must build and execute their plans in real time, interacting with a world simulation in which their actions have uncertain outcomes. Accordingly, we have developed an embedded agent system that negotiates to try to find approximately optimal distributed policies within tight time constraints. Contributions of our work include "unrolling" techniques for translating local hierarchical task networks into MDPs, "informed" heuristic search control of the unrolling process, and negotiation methods for allocating responsibilities across cooperating agents and for using those allocations to influence local policy construction.
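As a loose illustration of the unrolling idea mentioned above, the sketch below enumerates the states of a tiny two-task network as sequences of observed outcome qualities and computes the expected total quality by backward induction. It is a hypothetical toy example only: the task names, outcome probabilities, and quality values are invented for illustration and are not from the COORDINATORs system.

# A minimal Python sketch of "unrolling": enumerate the reachable states of a
# tiny two-task network as tuples of observed outcome qualities, then compute
# the expected total quality by backward induction. All names and numbers are
# hypothetical and purely illustrative.

TASKS = {
    "scan_area": [(0, 0.2), (5, 0.8)],   # (outcome quality, probability)
    "report":    [(0, 0.1), (3, 0.9)],
}
ORDER = ["scan_area", "report"]          # a fixed linearization of the task network

def unroll():
    """Enumerate states as tuples of outcome qualities observed so far."""
    states, frontier = [()], [()]
    while frontier:
        s = frontier.pop()
        if len(s) == len(ORDER):
            continue                      # all tasks resolved; terminal state
        for quality, _prob in TASKS[ORDER[len(s)]]:
            child = s + (quality,)
            states.append(child)
            frontier.append(child)
    return states

def expected_quality(state=()):
    """Expected total quality from `state` onward (backward induction)."""
    if len(state) == len(ORDER):
        return sum(state)
    return sum(p * expected_quality(state + (q,))
               for q, p in TASKS[ORDER[len(state)]])

if __name__ == "__main__":
    print(len(unroll()), "unrolled states;",
          "expected quality =", round(expected_quality(), 2))

In a problem of realistic size this exhaustive enumeration is exactly what becomes intractable; the paper's "informed" heuristic search control of the unrolling process is aimed at expanding only the most promising portions of such a state space.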