First-Order Markov Decision Processes

Matthew Greig

Markov Decision Processes (MDPs) [7] have recently emerged as a standard method for representing uncertainty in decision-theoretic planning. Traditional MDP solution techniques have the drawback that they require an explicit state space, limiting their applicability to real-world problems because of the large number of world states such problems contain. Recent work addresses this drawback by specifying the state space compactly in factored form, as the set of possible assignments to a set of state variables [1]. Such Factored MDPs (FMDPs) make it easy to represent exponentially large state spaces. Planning algorithms for MDPs, however, still run in time polynomial in the size of the state space, and hence exponential in the number of state variables; methods for exploiting the structure of the state space when planning with FMDPs have been proposed in the literature [1, 3].
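The gap between the factored description and the explicit state space can be sketched in a few lines. This is an illustrative example only; the variable names are hypothetical and not taken from the cited work. A factored description lists n Boolean state variables, while the explicit enumeration that traditional solution techniques operate over contains 2^n world states:

```python
from itertools import product

# Hypothetical Boolean state variables of a factored MDP description
# (names are illustrative, not from the cited papers).
state_vars = ["robot_has_coffee", "office_door_open", "mail_waiting"]

# The factored form needs only len(state_vars) variables, yet it
# implicitly defines 2 ** len(state_vars) explicit world states.
explicit_states = [
    dict(zip(state_vars, values))
    for values in product([False, True], repeat=len(state_vars))
]

print(len(state_vars))       # 3 variables in the factored description
print(len(explicit_states))  # 8 states in the explicit enumeration
```

With 30 such variables the factored description is still 30 items long, but the explicit state space already exceeds a billion states, which is why planning time polynomial in the explicit state space is exponential in the number of variables.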
