What the Robot Sees and Understands Facilitates Dialog

John Zelek, Dave Bullock, Sam Bromley, and Haisheng Wu

The applications that interest us include search-and-rescue robotics, security, elder or disabled care and assistance, and service robotics. In all of these application areas, visual perception and internal representations play a pivotal role in how humans and robots communicate. In general, visual perception plays an important role in communicating information amongst humans; it is typically imprecise, and humans usually rely on their cognitive abilities to interpret the intent behind it. We explore real-time probabilistic visual perception in three different roles: (1) tracking human limbs; (2) stereo vision for robot and human navigation; and (3) optical flow for detecting salient events and recovering structure from motion. Our visual perception results are expressed as probability distribution functions (i.e., Bayesian), and the robot requires this uncertainty to be propagated through any subsequent decision-making task. A related application we are also exploring uses real-time stereo vision to convey depth information to a blind person via tactile feedback. We anticipate that this will provide clues about internal representations that can form the basis for human-robot communication. We propose that this representation be based on a minimal spanning basis set of spatial prepositions and show how it can be used as a basis for commands. We assume that uncertainty can be conveyed through the linguistic representation via fuzzy descriptors.
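To make the last two ideas concrete, the following is a minimal illustrative sketch (not taken from the paper) of how a probabilistic depth estimate might be mapped onto fuzzy spatial-preposition descriptors. The preposition set {"near", "far"}, the trapezoidal membership shapes, and the Monte Carlo propagation of a Gaussian depth estimate are all assumptions made for illustration.

```python
# Hypothetical sketch: propagating uncertainty in a depth estimate
# through fuzzy descriptors for the spatial prepositions "near"/"far".
import random

def near_membership(d):
    """Assumed trapezoidal fuzzy membership for 'near' (metres):
    fully 'near' below 1 m, fully 'not near' beyond 3 m."""
    if d <= 1.0:
        return 1.0
    if d >= 3.0:
        return 0.0
    return (3.0 - d) / 2.0

def far_membership(d):
    # Complementary descriptor, so the two memberships sum to 1.
    return 1.0 - near_membership(d)

def expected_memberships(mean, std, samples=10000, seed=0):
    """Propagate a Gaussian depth distribution through the fuzzy
    descriptors by Monte Carlo averaging of the membership values."""
    rng = random.Random(seed)
    near_total = far_total = 0.0
    for _ in range(samples):
        d = max(0.0, rng.gauss(mean, std))  # depth cannot be negative
        near_total += near_membership(d)
        far_total += far_membership(d)
    return near_total / samples, far_total / samples

# A depth estimate of 1.8 m with 0.4 m uncertainty yields graded,
# linguistically reportable degrees of "near" and "far".
near, far = expected_memberships(mean=1.8, std=0.4)
print(f"near: {near:.2f}, far: {far:.2f}")
```

A robot could then report the descriptor with the highest expected membership ("the object is near"), while retaining the graded values for downstream decision-making.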

Copyright © AAAI. All rights reserved.