Proceedings: Distributed Plan and Schedule Management
Issue: Papers from the 2006 AAAI Spring Symposium
Abstract:
An individual planning agent does not generally have sufficient computational resources at its disposal to produce an optimal plan in a complex domain, as deliberation itself requires and consumes scarce resources. This problem is further exacerbated in a distributed planning context in which multiple, heterogeneous agents must expend a portion of their resource allotment on communication, negotiation, and shared planning activities with other cooperative agents. Because other agents can have different temporal grain sizes, planning horizons, deadlines, and access to distinct local information, the delays associated with local deliberation and, in turn, shared negotiation are asynchronous, unpredictable, and widely variable. We address this problem using a principled, decision-theoretic approach based on recent advances in Generalized Semi-Markov Decision Processes (GSMDPs). In particular, we use GSMDPs to model agents that develop a continuous-time deliberation policy offline, which can then be consulted to dynamically select both deliberation-level and domain-level actions at plan execution time. This scheme allows individual agents to model other cooperative agents' actions essentially as asynchronous events that, for example, might or might not fulfill a request (uncertain effect) after a stochastically determined delay (uncertain event duration). With this approach, the decision-theoretic planner for the individual agent can make near-optimal execution-time decisions that trade off the risks and opportunities associated with its own actions, other agents' actions, and asynchronous external threats.
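To make the modeling idea concrete, the following is a minimal, hypothetical sketch (not from the paper) of how another cooperative agent's response to a request might be represented as an asynchronous event with a stochastic delay and an uncertain effect, in the spirit of a GSMDP's race among concurrently enabled events. All names, distributions, and probabilities are illustrative assumptions, and the step function is a simplified simulation that resamples delays at each step rather than tracking general, non-memoryless clocks as a full GSMDP would.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, bool]

@dataclass
class AsyncEvent:
    """An asynchronous event: it fires after a stochastic delay and
    produces its intended effect only with some probability."""
    name: str
    delay_sampler: Callable[[], float]      # samples a delay in seconds
    success_prob: float                     # probability the effect actually occurs
    effect: Callable[[State], State]        # state transformation on success

def step(state: State, enabled: List[AsyncEvent]) -> Tuple[State, str, float]:
    """Advance the continuous-time model by one event occurrence.

    Samples a delay for every enabled event, fires the earliest one
    (the race among asynchronous events), and applies its effect with
    the event's success probability (uncertain effect)."""
    delays = [(ev.delay_sampler(), ev) for ev in enabled]
    delay, winner = min(delays, key=lambda d: d[0])
    if random.random() < winner.success_prob:
        state = winner.effect(state)
    return state, winner.name, delay

# Illustrative use: another agent may (or may not) fulfill a request,
# while an external threat may materialize first.
state: State = {"request_fulfilled": False, "threat_occurred": False}

other_agent_reply = AsyncEvent(
    name="other_agent_fulfills_request",
    delay_sampler=lambda: random.expovariate(1 / 5.0),  # assumed mean 5 s reply delay
    success_prob=0.7,                                    # assumed chance of actually fulfilling
    effect=lambda s: {**s, "request_fulfilled": True},
)

external_threat = AsyncEvent(
    name="external_threat_strikes",
    delay_sampler=lambda: random.uniform(2.0, 20.0),     # assumed threat arrival window
    success_prob=1.0,
    effect=lambda s: {**s, "threat_occurred": True},
)

state, fired, dt = step(state, [other_agent_reply, external_threat])
print(f"After {dt:.1f} s, event '{fired}' occurred; state = {state}")
```

A deliberation policy in this setting would be computed offline over such event models and then consulted at execution time to decide, given the current state and elapsed time, whether to keep waiting on the other agent, act locally, or spend further deliberation effort.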