Proceedings: No. 1: Agents that Learn from Human Teachers
Issue: Papers from the 2009 AAAI Spring Symposium
Abstract:
This paper presents a learning framework that enables a robot to learn comprehensive policies autonomously from a series of incrementally more challenging tasks designed by a human teacher. Psychologists have shown that human infants rapidly acquire general strategies and then extend that behavior with contingencies for new situations. This strategy allows an infant to quickly acquire new behavior and then to refine it over time. The psychology literature calls such compensatory action prospective behavior, and it has been identified as an important problem in robotics as well. In this paper, we provide an algorithm for learning prospective behavior to accommodate special-purpose situations that can occur when a general-purpose schema is applied to challenging new cases. The algorithm permits a robot to address complex tasks incrementally while reusing existing behavior as much as possible. First, we motivate prospective behavior in human infants and in common robotic tasks. We then introduce an algorithm that searches for places in a schema where compensatory actions can effectively avoid predictable future errors. The algorithm is evaluated on a simple grid-world navigation problem. Results show that learning performance improves significantly over an equivalent flat learning formulation by reusing knowledge as appropriate and extending behavior only when necessary. We conclude with a discussion of where prospective repair of general-purpose behavior can play important roles in the development of behavior for effective human-robot interaction.
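The abstract does not spell out the algorithm's details, so the following is only a hedged toy sketch of the general idea of prospective repair: run a reusable general-purpose policy in a grid world, identify the single state from which it would predictably fail, and patch just that state with a compensatory action while reusing the schema everywhere else. The grid layout, function names, and detour choice here are all invented for illustration; this is not the paper's method.

```python
# Hypothetical sketch (not the paper's algorithm): reuse a general-purpose
# grid-world policy and patch it with a compensatory action only at the
# state where the general policy would predictably fail.

GOAL = (4, 4)
WALL = {(0, 3)}  # obstacle that breaks the general-purpose schema

def general_policy(state):
    """General schema: step greedily toward the goal, columns first."""
    r, c = state
    if c < GOAL[1]:
        return (r, c + 1)
    return (r + 1, c)

def run(policy, start=(0, 0), max_steps=20):
    """Follow a policy; return (trajectory, reached_goal)."""
    state, path = start, [start]
    for _ in range(max_steps):
        nxt = policy(state)
        if nxt in WALL:
            return path, False        # predictable failure point
        state = nxt
        path.append(state)
        if state == GOAL:
            return path, True
    return path, False

def patch_with_detour(policy, fail_state, detour):
    """Prospective repair: insert one compensatory action at the failure
    state, reusing the general policy everywhere else."""
    def patched(state):
        if state == fail_state:
            return detour
        return policy(state)
    return patched

# Find where the general schema fails, then patch only that one state.
path, ok = run(general_policy)
fail_state = path[-1]  # the state whose next scheduled move is blocked
detour = (fail_state[0] + 1, fail_state[1])  # compensatory step around the wall
patched = patch_with_detour(general_policy, fail_state, detour)
path2, ok2 = run(patched)
```

In this toy setting the unpatched schema stalls before the wall (`ok` is false), while the patched policy reaches the goal by deviating at exactly one state and falling back on the reused schema elsewhere, mirroring the abstract's theme of extending behavior only when necessary.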