HiPPo: Hierarchical POMDPs for Planning Information Processing and Sensing Actions on a Robot

Mohan Sridharan, Jeremy Wyatt, Richard Dearden

Flexible general-purpose robots need to tailor their visual processing to their task, on the fly. We propose a new approach to this within a planning framework, where the goal is to plan a sequence of visual operators to apply to the regions of interest (ROIs) in a scene. We pose the visual processing problem as a Partially Observable Markov Decision Process (POMDP). This requires probabilistic models of operator effects to quantitatively capture the unreliability of the processing actions, and thus to reason precisely about trade-offs between plan execution time and plan reliability. Since planning in practically sized POMDPs is intractable, we show how to ameliorate this intractability somewhat for our domain by defining a hierarchical POMDP. We compare the hierarchical POMDP approach with a Continual Planning (CP) approach. In a real-robot visual domain, we show empirically that all the planning methods outperform naive application of all visual operators. The key result is that the POMDP methods produce more robust plans than either naive visual processing or the CP approach. In summary, we believe that visual processing problems represent a challenging and worthwhile domain for planning techniques, and that our hierarchical POMDP-based approach to them opens up a promising new line of research.
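The core trade-off the abstract describes, paying an operator's execution time to reduce uncertainty about a scene, can be illustrated with a minimal one-step lookahead in a toy single-ROI POMDP. This sketch is not the paper's model: the operator accuracy, cost, and penalty values below are hypothetical, and the paper's hierarchical formulation is far richer.

```python
# Toy single-ROI POMDP sketch (hypothetical numbers, not the paper's model):
# the robot must decide whether an ROI is "red" or "blue" using a noisy
# color operator that reports the true label with probability ACC.
ACC = 0.85            # assumed operator reliability
OP_COST = 1.0         # assumed execution-time cost of running the operator
WRONG_PENALTY = 10.0  # assumed penalty for committing to a wrong label

def bayes_update(belief_red, observation):
    """Return the posterior P(red) after the operator reports 'red' or 'blue'."""
    if observation == "red":
        num = ACC * belief_red
        den = ACC * belief_red + (1 - ACC) * (1 - belief_red)
    else:
        num = (1 - ACC) * belief_red
        den = (1 - ACC) * belief_red + ACC * (1 - belief_red)
    return num / den

def commit_cost(belief_red):
    """Expected cost of declaring the more likely label right now."""
    return WRONG_PENALTY * min(belief_red, 1 - belief_red)

def should_apply_operator(belief_red):
    """One-step lookahead: run the operator only if the expected reduction in
    misclassification cost outweighs its execution cost."""
    p_obs_red = ACC * belief_red + (1 - ACC) * (1 - belief_red)
    expected_after = (
        p_obs_red * commit_cost(bayes_update(belief_red, "red"))
        + (1 - p_obs_red) * commit_cost(bayes_update(belief_red, "blue"))
    )
    return OP_COST + expected_after < commit_cost(belief_red)
```

With a uniform belief the operator is worth its cost (`should_apply_operator(0.5)` is true), while at a confident belief of 0.95 no single observation can change the decision, so the planner skips the operator; this quantitative reasoning about when processing actions pay off is what the POMDP formulation buys over naive application of every operator.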

URL: http://www.cs.bham.ac.uk/~mzs/Papers/ICAPS08SridharanM27.pdf

Subjects: 1.11 Planning; 19. Vision

Submitted: Jun 26, 2008


This page is copyrighted by AAAI. All rights reserved.