Abstract:
Approximate linear programming (ALP) offers a promising framework for solving large factored Markov decision processes (MDPs) with both discrete and continuous states. Successful application of the approach depends on the choice of an appropriate set of feature functions defining the value function and on efficient methods for generating the constraints that determine the convex space of the solution. Applying ALP in continuous state-space settings poses an additional challenge: the number of constraints defining the problem is infinite. The objective of this work is to explore heuristics for selecting a finite subset of constraints that defines a good solution policy and for searching the space of such constraints more efficiently. The heuristics we develop rely on (1) the structure of the factored model and (2) stochastic state simulations to generate an appropriate set of constraints. The improvements resulting from these heuristics are illustrated on three large factored MDP problems with continuous states.
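For concreteness, the standard ALP formulation can be sketched in generic notation (the symbols below follow common usage and are not necessarily the paper's): with a linear value function approximation V(x) ≈ \sum_i w_i f_i(x) over feature functions f_i, the ALP is

\[
\min_{w} \; \sum_x \alpha(x) \sum_i w_i f_i(x)
\qquad \text{s.t.} \qquad
\sum_i w_i f_i(x) \;\ge\; R(x,a) + \gamma \int p(x' \mid x, a) \sum_i w_i f_i(x') \, dx' \quad \forall x, a,
\]

where \alpha(\cdot) is a state-relevance weighting and \gamma is the discount factor. In a continuous state space the constraints range over all state-action pairs and are therefore infinite in number; restricting them to a finite, well-chosen subset yields a tractable LP.

The sketch below illustrates the constraint-sampling idea on a toy one-dimensional continuous MDP, using scipy.optimize.linprog. The dynamics, reward, and feature functions are illustrative assumptions only and do not correspond to the paper's problem domains or to its specific constraint-selection heuristics.

# A minimal constraint-sampling ALP sketch for a toy 1-D continuous MDP.
# The dynamics, reward, and features below are illustrative assumptions,
# not the domains or the constraint-selection heuristics from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
gamma = 0.95
actions = np.array([-0.1, 0.0, 0.1])            # toy discrete action set

def features(x):
    # Feature functions f_i(x) defining the linear value function approximation.
    return np.array([1.0, x, x * x])

def reward(x, a):
    # Toy quadratic cost (negative reward).
    return -x * x - 0.1 * a * a

def simulate_next(x, a, n=20):
    # Stochastic state simulation: sample n successor states under Gaussian noise.
    return np.clip(x + a + 0.05 * rng.standard_normal(n), -1.0, 1.0)

# 1) Sample a finite set of states instead of enumerating the continuous space.
states = rng.uniform(-1.0, 1.0, size=200)

# 2) One sampled Bellman constraint per (state, action) pair:
#      sum_i w_i f_i(x) >= R(x, a) + gamma * E[ sum_i w_i f_i(x') ],
#    rearranged into the linprog form A_ub @ w <= b_ub.
A_ub, b_ub = [], []
for x in states:
    for a in actions:
        exp_next_phi = np.mean([features(xn) for xn in simulate_next(x, a)], axis=0)
        A_ub.append(gamma * exp_next_phi - features(x))
        b_ub.append(-reward(x, a))

# 3) Objective: minimize the state-relevance-weighted value estimate
#    (uniform relevance weights over the sampled states).
c = np.mean([features(x) for x in states], axis=0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * len(c), method="highs")
print(res.message)
print("ALP weights:", res.x)

Constraint sampling of this kind keeps the LP finite; the heuristics studied in the paper concern how the factored model structure and simulated state trajectories guide which constraints are generated.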