Track: All Papers
Abstract:
Probabilistic planning problems are often modeled as Markov decision processes (MDPs), which assume that a single unit-length action is executed per decision epoch. However, in the real world it is common to execute several actions in parallel. This paper presents efficient methods for solving probabilistic planning problems with concurrent, durative actions. We extend Concurrent MDPs, i.e., MDPs that allow multiple instantaneous actions to be executed simultaneously, by adding explicit action durations. We present two novel admissible heuristics and one inadmissible heuristic to speed up convergence. We also develop a novel notion of hybridizing an optimal and an approximate algorithm, yielding a hybrid algorithm that quickly generates high-quality policies. Experiments show that all our heuristics speed up policy construction significantly. Furthermore, our approximate hybrid algorithm runs up to two orders of magnitude faster than other methods, while producing policies that are close to optimal.
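As a rough illustration of the kind of model the abstract describes, the Python sketch below shows one way to represent an MDP whose decision epochs may start several durative actions at once. It is not the paper's formulation; the names (DurativeAction, ConcurrentMDP, applicable_sets) and the pairwise mutex encoding of concurrency constraints are illustrative assumptions.

```python
# Illustrative sketch only (assumed names and structure, not the paper's model):
# an MDP in which a decision epoch may start a mutex-free set of durative actions.
from dataclasses import dataclass
from itertools import chain, combinations
from typing import Dict, FrozenSet, List, Tuple


@dataclass(frozen=True)
class DurativeAction:
    name: str
    duration: int                               # explicit action duration, in epochs
    transition: Tuple[Tuple[str, float], ...]   # (next_state, probability) pairs


@dataclass
class ConcurrentMDP:
    states: List[str]
    actions: List[DurativeAction]
    # mutex[(a, b)] = True means actions a and b may not execute concurrently
    mutex: Dict[Tuple[str, str], bool]

    def applicable_sets(self) -> List[FrozenSet[str]]:
        """Enumerate non-empty sets of actions that could be started together
        in one decision epoch, i.e., sets containing no mutex pair."""
        names = [a.name for a in self.actions]
        subsets = chain.from_iterable(
            combinations(names, r) for r in range(1, len(names) + 1))
        return [frozenset(s) for s in subsets
                if all(not self.mutex.get((a, b), False)
                       for a in s for b in s if a != b)]
```

Under this kind of encoding, the policy maps each state to a set of actions rather than a single action, which is what makes the concurrent, durative setting harder to solve than a standard MDP.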