Eye Finding via Face Detection for a Foveated Active Vision System

Brian Scassellati

Eye finding is the first step toward building a machine that can recognize social cues, like eye contact and gaze direction, in a natural context. In this paper, we present a real-time implementation of an eye finding algorithm for a foveated active vision system. The system uses a motion-based prefilter to identify potential face locations. These locations are analyzed for faces with a template-based algorithm developed by Sinha (1996). Detected faces are tracked in real time, and the active vision system saccades to the face using a learned sensorimotor mapping. Once gaze has been centered on the face, a high-resolution image of the eye can be captured from the foveal camera using a self-calibrated peripheral-to-foveal mapping. We also present a performance analysis of Sinha's ratio template algorithm on a standard set of static face images. Although the algorithm performs relatively poorly on static images, this result is a poor indicator of the real-time performance of the behaving system: in a set of behavioral trials, our system located eyes in 94% of cases. We suggest that alternate means of evaluating behavioral systems are necessary.
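To make the ratio template idea concrete, the following is a minimal sketch of the kind of brightness-ratio test described by Sinha (1996): a candidate window is divided into coarse facial regions, and a face is declared when enough pairwise "region A is brighter than region B" relations hold. The specific region geometry in REGIONS, the relation list in RELATIONS, and the thresholds min_ratio and min_votes are illustrative assumptions, not the template or parameters used in the paper.

```python
import numpy as np

# Assumed region layout as (row0, row1, col0, col1) fractions of the
# candidate window; only an approximation of Sinha's template geometry.
REGIONS = {
    "left_forehead":  (0.00, 0.25, 0.05, 0.35),
    "right_forehead": (0.00, 0.25, 0.65, 0.95),
    "left_eye":       (0.30, 0.50, 0.05, 0.35),
    "right_eye":      (0.30, 0.50, 0.65, 0.95),
    "nose":           (0.35, 0.75, 0.40, 0.60),
    "left_cheek":     (0.55, 0.80, 0.05, 0.35),
    "right_cheek":    (0.55, 0.80, 0.65, 0.95),
    "mouth":          (0.80, 1.00, 0.25, 0.75),
}

# Pairwise relations of the form "first region is brighter than second";
# this particular set is an assumption chosen to reflect typical face
# shading (eyes and mouth darker than forehead, cheeks, and nose).
RELATIONS = [
    ("left_forehead", "left_eye"),
    ("right_forehead", "right_eye"),
    ("left_cheek", "left_eye"),
    ("right_cheek", "right_eye"),
    ("nose", "left_eye"),
    ("nose", "right_eye"),
    ("left_cheek", "mouth"),
    ("right_cheek", "mouth"),
]


def region_mean(window, box):
    """Average intensity of one template region within a grayscale window."""
    h, w = window.shape
    r0, r1, c0, c1 = box
    patch = window[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
    return float(patch.mean())


def is_face(window, min_ratio=1.1, min_votes=7):
    """Declare a face if enough brightness-ratio relations hold."""
    means = {name: region_mean(window, box) for name, box in REGIONS.items()}
    votes = sum(means[a] > min_ratio * means[b] for a, b in RELATIONS)
    return votes >= min_votes
```

In a full pipeline such as the one the paper describes, a test like this would be applied only at the candidate locations selected by the motion-based prefilter, which keeps the per-frame cost low enough for real-time operation.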
