Abstract:
This work addresses the relevance of Gibson's concept of affordances to visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations, we identify the importance of learning in perceptual cueing for anticipating a robotic agent's opportunities for interaction. We investigate how the original representational concept for the perception of affordances, defined in terms of either optical flow or heuristically determined 3D features of perceptual entities, should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we present a new framework for the cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and should provide the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation of affordance learning, applying state-of-the-art visual descriptors extracted from a simulated robot scenario, and show that these features are successfully selected for their relevance in predicting opportunities for robot interaction.
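To make the final claim more concrete, the following is a minimal sketch, not the paper's actual implementation, of how visual cues might be scored for their relevance in predicting an interaction outcome and then used to train a predictor. The synthetic data, the feature dimensionality, the mutual-information relevance score, and the random-forest classifier are all illustrative assumptions standing in for the paper's visual descriptors and learning method.

# Sketch (assumed setup, not the authors' code): score each visual cue
# by its relevance to a binary "interaction afforded" label, keep the
# most informative cues, and train a classifier on them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for 2D/3D visual descriptors from a simulated robot scene:
# 200 samples x 12 feature dimensions; only dims 0 and 3 carry signal.
X = rng.normal(size=(200, 12))
y = ((X[:, 0] + 0.8 * X[:, 3]) > 0).astype(int)  # 1 = interaction afforded

# Score each cue's relevance to the predicted interaction outcome.
relevance = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(relevance)[-4:]  # keep the 4 most informative cues

# Train a predictor on the selected cues and check generalization.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("selected cue indices:", selected)
print("held-out accuracy:", clf.score(X_te, y_te))

Run on the synthetic data above, the relevance ranking recovers the two informative dimensions, which mirrors the abstract's claim that features are selected for their predictive relevance rather than fixed heuristically in advance.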