Abstract:
A partially observable Markov decision process (POMDP) provides an elegant model for problems of planning under uncertainty. Solving POMDPs is computationally challenging, however, and improving the scalability of POMDP algorithms is an important research problem. One way to reduce the computational complexity of planning with POMDPs is to use state aggregation to reduce the (effective) size of the state space. State aggregation techniques that rely on a factored representation of a POMDP have been developed in previous work. In this paper, we describe similar techniques that do not rely on a factored representation. These techniques are simpler to implement and make this approach to reducing the complexity of POMDPs more generally applicable. We describe state aggregation techniques that allow both exact and approximate solution of non-factored POMDPs and demonstrate their effectiveness on a range of benchmark problems.
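
For readers unfamiliar with the idea, the following is a minimal sketch of one classical form of exact state aggregation on a flat (non-factored) tabular POMDP: repeatedly merging states that have identical rewards, identical observation distributions, and identical block-level transition behaviour, until the partition stabilises. This is only an illustration of the general technique the abstract names, not the algorithm from the paper; the array layouts and the name aggregate_states are hypothetical.

import numpy as np

def aggregate_states(T, O, R):
    # Hypothetical tabular POMDP layout (not from the paper):
    #   T[a, s, s2] = P(s2 | s, a)
    #   O[a, s, z]  = P(z | s, a)
    #   R[a, s]     = immediate reward for action a in state s
    # Returns a list mapping each state to its aggregate block index.
    A, S, _ = T.shape
    blocks, n_blocks = [0] * S, 1
    while True:
        sigs, new_blocks = {}, [0] * S
        for s in range(S):
            # Probability mass sent into each current block, per action.
            mass = np.zeros((A, n_blocks))
            for a in range(A):
                for s2 in range(S):
                    mass[a, blocks[s2]] += T[a, s, s2]
            # States are grouped by reward, observation, and block-level
            # transition signatures; including the old block label means
            # the partition can only be refined, never coarsened.
            key = (blocks[s],
                   tuple(np.round(R[:, s], 9)),
                   tuple(np.round(O[:, s, :].ravel(), 9)),
                   tuple(np.round(mass.ravel(), 9)))
            new_blocks[s] = sigs.setdefault(key, len(sigs))
        if len(sigs) == n_blocks:   # refinement produced no new splits,
            return blocks           # so the partition has stabilised
        blocks, n_blocks = new_blocks, len(sigs)

An aggregated model can then be built by summing transition mass over blocks and taking rewards and observation rows from any representative state of each block. An approximate variant of this kind of aggregation would relax the exact-equality tests (e.g., compare signatures up to a tolerance), though how the paper's approximate technique does so is not specified in the abstract.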