Abstract:
Partially observable Markov decision process (POMDP) models have been applied to low-level robot control. We show how to use POMDPs differently, namely for sensor planning in the context of behavior-based robot systems. This is possible because solutions of POMDPs can be expressed as policy graphs, which are similar to the finite state automata that behavior-based systems use to sequence their behaviors. An advantage of our system over previous POMDP navigation systems is that it is able to find close-to-optimal plans, since it plans at a higher level and thus with smaller state spaces. An advantage of our system over behavior-based systems that must be programmed by their users is that it can optimize plans during missions and thus deal robustly with probabilistic models that are initially inaccurate.
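To illustrate the correspondence the abstract draws between POMDP policy graphs and the finite state automata of behavior-based systems, the following minimal sketch (not taken from the paper; all names are illustrative assumptions) represents a policy graph whose nodes prescribe behaviors and whose observation-labeled edges select the next node, which is exactly how an FSA would sequence behaviors.

```python
# Illustrative sketch only: a policy graph executed like a finite state
# automaton. Each node names a behavior to run; the observation obtained
# after running it selects the successor node. Not the authors' code.

from dataclasses import dataclass, field


@dataclass
class PolicyNode:
    behavior: str                                    # behavior to execute in this node
    successors: dict = field(default_factory=dict)   # observation -> next PolicyNode


def execute_policy_graph(start, run_behavior, get_observation, max_steps=20):
    """Run the policy graph as an FSA over behaviors (hypothetical interface):
    execute the node's behavior, observe, and follow the matching edge."""
    node = start
    for _ in range(max_steps):
        run_behavior(node.behavior)
        obs = get_observation()
        # Stay in the current node if the observation has no outgoing edge.
        node = node.successors.get(obs, node)
    return node
```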
Published Date: May 2001
Registration: ISBN 978-1-57735-133-7
Copyright: Published by The AAAI Press, Menlo Park, California.