Unrolling Complex Task Models into MDPs

Robert P. Goldman, David J. Musliner, Mark S. Boddy, Edmund H. Durfee

Markov Decision Processes (MDPs) provide a unifying theoretical framework for reasoning about action under uncertainty, and in some domains they provide a direct solution. However, many AI approaches rely on richly expressive task models that cannot readily be captured as MDPs. We have developed a technique for "unrolling" such task models into an MDP that can be solved for an (approximately) optimal policy. In this paper, we describe how we have built a decision-theoretic agent that solves planning/scheduling problems formulated in a dialect of the TAEMS hierarchical task language. One difficulty in bridging from expressive languages to MDPs is that these languages may not make the same simplifying assumptions as those behind MDPs. For example, TAEMS task models have non-Markovian aspects, and TAEMS actions must be scheduled against global time. We describe both how we translate these TAEMS features into MDPs and how we cope with the state-space explosion that can result. TAEMS problems are particularly interesting because they are goal-based: the utility of an outcome cannot, in general, be determined until the final state is reached, so they offer limited traction to approaches that exploit interim reward information. We describe several optimizations that make the planning problem more tractable, notably aggressive pruning of actions, collapsing equivalent states in the state space, and lazy exploration of the state space.
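The core ideas in the abstract — unrolling a goal-based task model into an MDP, pruning infeasible actions, collapsing equivalent states, exploring the state space lazily, and evaluating utility only at the final state — can be illustrated with a toy sketch. The task model below (two tasks with durations and success probabilities, a global deadline, and utility awarded only when both tasks complete) is an invented illustration, not the paper's actual TAEMS semantics or algorithm.

```python
# Hedged sketch: lazily unrolling a tiny goal-based task model into a
# time-indexed MDP and solving it by memoized backward induction.
# HORIZON, ACTIONS, and utility() are illustrative assumptions.
from functools import lru_cache

HORIZON = 4
# action name -> (duration, probability of success)
ACTIONS = {"a": (1, 0.8), "b": (2, 0.9)}

def utility(done):
    # Goal-based: utility is only determined at the final state.
    return 10.0 if {"a", "b"} <= set(done) else 0.0

@lru_cache(maxsize=None)  # memoization collapses equivalent (time, done) states
def value(time, done):
    # States are explored lazily: only those reachable from the query.
    if time >= HORIZON:
        return utility(done)
    best = value(HORIZON, done)  # option: idle until the deadline
    for act, (dur, p) in ACTIONS.items():
        if act in done or time + dur > HORIZON:
            continue  # prune repeated actions and actions that cannot fit
        succ = value(time + dur, tuple(sorted(set(done) | {act})))
        fail = value(time + dur, done)
        best = max(best, p * succ + (1 - p) * fail)
    return best

print(value(0, ()))  # expected utility of the optimal policy from the start
```

Because utility is known only at terminal states, the recursion must bottom out at the horizon before any value propagates backward; the `lru_cache` plays the role of the state-collapsing step, so two action histories that reach the same (time, completed-tasks) pair are evaluated once.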

Subjects: 3.4 Probabilistic Reasoning; 7.1 Multi-Agent Systems

Submitted: Jan 26, 2007

This page is copyrighted by AAAI. All rights reserved.