Proceedings:
Vol. 19 (2009): Nineteenth International Conference on Automated Planning and Scheduling
Track:
Long Papers
Abstract:
Solving multiagent planning problems modeled as DEC-POMDPs is an important challenge. These models are often solved by dynamic programming, but the high resource usage of current approaches results in limited scalability. To improve the efficiency of dynamic programming algorithms, we propose a new backup algorithm based on a reachability analysis of the state space. This method, which we call incremental policy generation, can be used to produce an optimal solution for any possible initial state; further scalability can be achieved by exploiting a known start state. Our experiments show that when incorporated into the optimal dynamic programming algorithm, this approach allows the planning horizon to be increased due to a marked reduction in resource consumption. The approach also fits naturally with approximate dynamic programming algorithms: incorporating it into the state-of-the-art PBIP algorithm yields significant performance gains. These results suggest that the performance of other dynamic programming algorithms for DEC-POMDPs could be similarly improved by integrating the incremental policy generation approach.
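The core idea the abstract describes, restricting dynamic programming backups to states reachable from a known start state, can be illustrated on a toy single-agent MDP. This is only a sketch of the general principle with invented states and transitions, not the paper's DEC-POMDP backup algorithm:

```python
from collections import deque

# Hypothetical toy MDP (invented for illustration):
# transitions[s][action] -> list of (next_state, probability)
transitions = {
    0: {'a': [(1, 1.0)], 'b': [(2, 1.0)]},
    1: {'a': [(0, 0.5), (2, 0.5)]},
    2: {'a': [(2, 1.0)]},
    3: {'a': [(4, 1.0)]},  # states 3-4 are unreachable from state 0
    4: {'a': [(3, 1.0)]},
}
rewards = {0: 0.0, 1: 1.0, 2: 2.0, 3: 10.0, 4: 10.0}

def reachable_states(start, transitions):
    """Breadth-first search over the transition graph from a known start state."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for successors in transitions.get(s, {}).values():
            for s2, _p in successors:
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append(s2)
    return seen

def value_iteration(states, transitions, rewards, gamma=0.9, iters=100):
    """Run DP backups only over the supplied (reachable) state set."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: rewards[s] + gamma * max(
                 (sum(p * V[s2] for s2, p in succs)
                  for succs in transitions.get(s, {}).values()),
                 default=0.0)
             for s in states}
    return V

reach = reachable_states(0, transitions)   # {0, 1, 2}: states 3-4 are pruned
V = value_iteration(reach, transitions, rewards)
```

Because the reachable set is closed under the transition function, the backups never reference a pruned state, so states 3 and 4 contribute no computation or memory, which is the source of the resource savings the abstract reports (there achieved over policy trees and belief reachability in the DEC-POMDP setting rather than over a flat state space).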
DOI:
10.1609/icaps.v19i1.13355