Proceedings: No. 1: Agents that Learn from Human Teachers
Issue: Papers from the 2009 AAAI Spring Symposium
Abstract:
As robots become more commonplace within society, the need for tools that enable non-robotics-experts to develop control algorithms, or policies, will increase. Learning from Demonstration (LfD) offers one promising approach, in which the robot learns a policy from teacher task executions. In this work we present an algorithm that incorporates human teacher feedback to enable policy improvement from learner experience within an LfD framework. We present two implementations of this algorithm, which differ in the sort of feedback the teacher provides. In the first implementation, called Binary Critiquing (BC), the teacher provides a binary indication that highlights poorly performing portions of the execution. In the second implementation, called Advice-Operator Policy Improvement (A-OPI), the teacher provides a correction on poorly performing portions of the student execution. Most notably, these corrections are continuous-valued and appropriate for low-level motion control action spaces. The algorithms are applied to simulated and real robot validation domains. For both, policy performance is found to improve with teacher feedback. Specifically, with BC, learner execution success and efficiency come to exceed teacher performance. With A-OPI, task success and accuracy are shown to be similar or superior to those of the typical LfD approach of correcting behavior through more teacher demonstrations.
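To make the feedback loop concrete, the Python sketch below illustrates how a BC-style binary critique and an A-OPI-style correction might each update a demonstration dataset. It is a minimal sketch under assumptions of our own: the toy nearest-neighbor policy, the weighted dataset, the penalty factor, and all names (Dataset, binary_critique_update, advice_operator_update, etc.) are hypothetical illustrations, not the paper's actual implementation.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

State = Tuple[float, ...]
Action = Tuple[float, ...]


@dataclass
class Dataset:
    """Demonstration dataset of (state, action, weight) points."""
    points: List[Tuple[State, Action, float]] = field(default_factory=list)

    def add(self, state: State, action: Action, weight: float = 1.0) -> None:
        self.points.append((state, action, weight))


def nearest_neighbor_policy(data: Dataset) -> Callable[[State], Action]:
    """A stand-in LfD policy: act like the closest, highest-weight demonstration."""
    def policy(state: State) -> Action:
        best = max(
            data.points,
            key=lambda p: p[2] / (1e-6 + sum((a - b) ** 2 for a, b in zip(state, p[0]))),
        )
        return best[1]
    return policy


def binary_critique_update(data: Dataset, flagged: List[int]) -> None:
    """BC-style update: down-weight datapoints the teacher flagged as poor."""
    for i in flagged:
        s, a, w = data.points[i]
        data.points[i] = (s, a, w * 0.5)  # penalty factor 0.5 is an assumption


def advice_operator_update(
    data: Dataset,
    execution: List[Tuple[State, Action]],
    flagged: List[int],
    operator: Callable[[Action], Action],
) -> None:
    """A-OPI-style update: apply a continuous-valued correction (an advice
    operator) to flagged learner actions and add the corrected points."""
    for i in flagged:
        state, action = execution[i]
        data.add(state, operator(action))


if __name__ == "__main__":
    data = Dataset()
    data.add((0.0,), (1.0,))  # one teacher demonstration
    policy = nearest_neighbor_policy(data)
    execution = [((0.0,), policy((0.0,)))]  # learner executes its policy
    # Teacher flags step 0 and supplies a "scale down" advice operator.
    advice_operator_update(data, execution, [0], lambda a: tuple(x * 0.8 for x in a))

The contrast the abstract draws is visible here: BC only removes credit from flagged behavior, while A-OPI synthesizes new, corrected training data from the learner's own execution.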