Abstract:
This paper presents and empirically compares two solutions to the problem of vision and self-localization on a mobile robot. In the commonly used particle filtering approach, the robot identifies regions in each image that correspond to landmarks in the environment. These landmarks are then used to update a probability distribution over the robot's possible poses. In the expectation-based approach, an expected view of the world is first constructed based on a prior camera pose estimate. This view is compared to the actual camera image to determine a corrected pose. This paper compares the accuracies of the two approaches on a test-bed domain, finding that the expectation-based approach yields a significantly higher overall localization accuracy than a state-of-the-art implementation of the particle filtering approach. This paper's contributions are an exposition of two competing approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each method.
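To make the particle filtering step concrete, the sketch below is a minimal illustration, not the paper's implementation: the landmark map, the range-only observation model, the noise levels, and the post-resampling jitter are all illustrative assumptions. It shows only the core idea the abstract describes, namely that landmark observations reweight and resample a distribution over candidate robot poses.

import math
import random

LANDMARKS = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0)]  # assumed known map (meters)
OBS_NOISE = 0.3      # assumed std. dev. of range measurements (meters)
N_PARTICLES = 500

def gaussian(mu, sigma, x):
    """Probability density of x under N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def measurement_weight(pose, ranges):
    """Weight a candidate pose by the likelihood of the observed landmark ranges."""
    x, y, _ = pose
    w = 1.0
    for (lx, ly), observed in zip(LANDMARKS, ranges):
        expected = math.hypot(lx - x, ly - y)
        w *= gaussian(expected, OBS_NOISE, observed)
    return w

def update(particles, ranges):
    """One sensor update: reweight by likelihood, resample, then jitter."""
    weights = [measurement_weight(p, ranges) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Small diffusion noise keeps the particle set from collapsing.
    return [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05), th)
            for x, y, th in resampled]

# Example: true pose (2, 1, 0); simulate noisy ranges and localize
# from an initially uniform particle set over the field.
ranges = [math.hypot(lx - 2.0, ly - 1.0) + random.gauss(0, OBS_NOISE)
          for lx, ly in LANDMARKS]
particles = [(random.uniform(0, 4), random.uniform(0, 3),
              random.uniform(-math.pi, math.pi)) for _ in range(N_PARTICLES)]
for _ in range(10):
    particles = update(particles, ranges)
est_x = sum(p[0] for p in particles) / N_PARTICLES
est_y = sum(p[1] for p in particles) / N_PARTICLES
print(f"estimated position: ({est_x:.2f}, {est_y:.2f})")

A full system would interleave these sensor updates with a motion model that propagates each particle between images; the expectation-based alternative instead refines a single prior pose by matching an expected view against the camera image, avoiding the particle set altogether.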