Abstract:
Probabilistic planning problems are often modeled as Markov decision processes (MDPs), which assume that a single action is executed per decision epoch and that actions take unit time. However, in the real world it is common to execute several actions in parallel, and the durations of these actions may differ. This paper presents our ongoing work on incorporating concurrent, durative actions in probabilistic planning. In particular, we describe concurrent MDPs: MDPs that allow multiple instantaneous actions to be executed simultaneously. We present two algorithms that run orders of magnitude faster than naive approaches. We then add explicit action durations to our model and encode them as concurrent MDPs in an augmented state space. We present a novel heuristic and prove it admissible. Initial experimental results demonstrate the promise of our approach, showing speedups due to both the heuristic and our sampled RTDP algorithm.
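As a rough illustration of the core idea, the following Python sketch shows the kind of Bellman backup a concurrent MDP requires: the optimization ranges over combinations of applicable actions rather than single actions, and sampling only a subset of combinations per backup is the idea a sampled RTDP-style algorithm exploits. All names and signatures below (applicable, outcome_distribution, cost) are hypothetical placeholders for illustration, not the paper's actual implementation.

    import itertools
    import random

    def sampled_backup(state, value, applicable, outcome_distribution,
                       cost, gamma=1.0, num_samples=20):
        """Return an updated value estimate for `state` in a concurrent MDP.

        applicable(state)          -> list of single actions allowed in `state`
        outcome_distribution(s, A) -> list of (next_state, probability) pairs
                                      for executing the action set A in s
        cost(s, A)                 -> immediate cost of executing A in s
        """
        actions = applicable(state)
        # Non-empty combinations of actions; a full backup would enumerate
        # all of them, which grows exponentially with the number of actions.
        all_combos = [frozenset(c)
                      for r in range(1, len(actions) + 1)
                      for c in itertools.combinations(actions, r)]
        # Sample a subset of combinations to keep each backup cheap.
        combos = random.sample(all_combos, min(num_samples, len(all_combos)))

        best = float('inf')
        for combo in combos:
            q = cost(state, combo) + gamma * sum(
                p * value[s2] for s2, p in outcome_distribution(state, combo))
            best = min(best, q)
        return best

This sketch assumes a cost-minimization formulation and ignores issues such as mutual exclusion between actions in a combination, which a real concurrent MDP solver must handle.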