Expectation-Based Vision for Precise Self-Localization on a Mobile Robot

Daniel Stronger, Peter Stone

This paper presents and empirically compares two solutions to the problem of vision and self-localization on a mobile robot. In the commonly used particle filtering approach, the robot identifies regions in each image that correspond to landmarks in the environment. These landmarks are then used to update a probability distribution over the robot's possible poses. In the expectation-based approach, an expected view of the world is first constructed based on a prior camera pose estimate. This view is compared to the actual camera image to determine a corrected pose. This paper compares the accuracies of the two approaches on a test-bed domain, finding that the expectation-based approach yields a significantly higher overall localization accuracy than a state-of-the-art implementation of the particle filtering approach. This paper's contributions are an exposition of two competing approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each method.
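The particle filtering approach summarized above maintains a distribution over candidate poses and reweights it using observed landmarks. As a rough illustration only (the paper's actual implementation is not reproduced here), the following is a minimal sketch of one landmark-bearing update step; the particle representation, noise model, and resampling scheme are all simplifying assumptions.

```python
import math
import random

def pf_update(particles, landmark, observed_bearing, bearing_noise=0.2):
    """One illustrative particle-filter update from a single landmark bearing.

    particles: list of (x, y, theta, weight) pose hypotheses.
    landmark:  (x, y) position of the observed landmark in world coordinates.
    observed_bearing: bearing to the landmark measured from the camera image,
                      relative to the robot's heading (radians).
    """
    # Reweight each particle by how well its predicted bearing matches the
    # observation, under an assumed Gaussian bearing-noise model.
    weighted = []
    for (x, y, theta, w) in particles:
        expected = math.atan2(landmark[1] - y, landmark[0] - x) - theta
        # Wrap the angular error into (-pi, pi].
        err = math.atan2(math.sin(expected - observed_bearing),
                         math.cos(expected - observed_bearing))
        w *= math.exp(-(err ** 2) / (2 * bearing_noise ** 2))
        weighted.append((x, y, theta, w))

    # Normalize weights, then resample in proportion to weight.
    total = sum(w for (_, _, _, w) in weighted) or 1.0
    weights = [w / total for (_, _, _, w) in weighted]
    resampled = random.choices(weighted, weights=weights, k=len(weighted))
    n = len(resampled)
    return [(x, y, theta, 1.0 / n) for (x, y, theta, _) in resampled]
```

For example, a particle whose heading points directly at a landmark observed at bearing zero keeps its weight, while a particle facing 90 degrees away is driven toward zero weight and is unlikely to survive resampling. The expectation-based approach inverts this flow: rather than weighting hypotheses by extracted landmarks, it renders the expected view from a single prior pose and corrects that pose from the image discrepancy.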

Subjects: 19.1 Perception; 17. Robotics

Submitted: May 31, 2006
