Track: All Papers
Abstract:
Flexible, general-purpose robots need to tailor their visual processing to their task on the fly. We propose a new approach to this within a planning framework, where the goal is to plan a sequence of visual operators to apply to the regions of interest (ROIs) in a scene. We pose the visual processing problem as a Partially Observable Markov Decision Process (POMDP). This requires probabilistic models of operator effects to quantitatively capture the unreliability of the processing actions, and thus to reason precisely about trade-offs between plan execution time and plan reliability. Since planning in practically sized POMDPs is intractable, we show how to ameliorate this intractability for our domain by defining a hierarchical POMDP. We compare the hierarchical POMDP approach with a Continual Planning (CP) approach. In a real-robot visual domain, we show empirically that all the planning methods outperform naive application of all visual operators. The key result is that the POMDP methods produce more robust plans than either naive visual processing or the CP approach. In summary, we believe that visual processing problems represent a challenging and worthwhile domain for planning techniques, and that our hierarchical POMDP-based approach to them opens up a promising new line of research.
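To make the formulation concrete, the sketch below shows, in very reduced form, how visual processing on a single ROI might be posed as a POMDP: a hidden object property, noisy visual operators as sensing actions, a belief updated from operator outputs, and a trade-off between operator cost and answer reliability. This is an illustrative assumption-laden toy, not the authors' model; all operators, state labels, probabilities, and rewards are hypothetical.

    # Minimal, hypothetical sketch of a single-ROI visual-processing POMDP.
    # Not the paper's formulation; all names and numbers are made up.
    import random

    # Hidden state: the true colour of the object in the ROI.
    STATES = ["red", "green", "blue"]

    # Actions: apply a noisy visual operator, or commit to an answer.
    OPERATOR = "run_colour_operator"
    OPERATOR_COST = -1.0                       # plan execution time cost
    CORRECT_REWARD, WRONG_REWARD = 10.0, -25.0 # plan reliability trade-off

    # Probabilistic model of operator effects: P(observation | true state).
    OBS_MODEL = {s: {o: (0.8 if o == s else 0.1) for o in STATES} for s in STATES}

    def update_belief(belief, observation):
        """Bayesian belief update after observing the operator's output."""
        new = {s: belief[s] * OBS_MODEL[s][observation] for s in STATES}
        norm = sum(new.values())
        return {s: p / norm for s, p in new.items()}

    def policy(belief, threshold=0.9):
        """Commit once a label is sufficiently likely; otherwise sense again."""
        best = max(belief, key=belief.get)
        return f"say_{best}" if belief[best] >= threshold else OPERATOR

    if __name__ == "__main__":
        true_state = random.choice(STATES)
        belief = {s: 1.0 / len(STATES) for s in STATES}   # uniform prior
        total_reward = 0.0
        while True:
            action = policy(belief)
            if action == OPERATOR:
                total_reward += OPERATOR_COST
                weights = [OBS_MODEL[true_state][o] for o in STATES]
                obs = random.choices(STATES, weights=weights)[0]
                belief = update_belief(belief, obs)
            else:
                label = action.removeprefix("say_")
                total_reward += CORRECT_REWARD if label == true_state else WRONG_REWARD
                print(f"true={true_state} answered={label} reward={total_reward:.1f}")
                break

The threshold policy here is only a stand-in for a real POMDP solver; its role is to show where the operator's unreliability (OBS_MODEL) and the time/reliability rewards would enter such a formulation.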