Value Functions for RL-Based Behavior Transfer: A Comparative Study

Matthew E. Taylor, Peter Stone, Yaxin Liu

Temporal difference (TD) learning methods have become popular reinforcement learning techniques in recent years. TD methods, relying on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but have often been found slow in practice. This paper presents methods for further generalizing across tasks, thereby speeding up learning, via a novel form of behavior transfer. We compare learning on a complex task with three function approximators, a CMAC, an artificial neural network, and a radial basis function (RBF) network, and demonstrate that behavior transfer works well with all three. Using behavior transfer, agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain.
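
To make the transfer idea concrete, below is a minimal Python sketch of value-function transfer with a CMAC-style tile coder: a linear Sarsa(0) learner is trained on a simple source task, and its learned weights are copied to initialize learning on a related target task. The toy tasks, tile-coder parameters, and function names are illustrative assumptions only, not the Keepaway setup or the authors' implementation.

    import numpy as np

    N_TILINGS, N_TILES = 8, 10   # hypothetical CMAC-style coder: 8 offset tilings of 10 tiles each
    N_ACTIONS = 2                # toy actions: 0 = step left, 1 = step right

    def active_tiles(state, action):
        # Indices of the tiles activated by (state, action); state assumed in [0, 1].
        idx = []
        for t in range(N_TILINGS):
            offset = t / (N_TILINGS * N_TILES)
            tile = int(min(state + offset, 0.9999) * N_TILES)
            idx.append((t * N_TILES + tile) * N_ACTIONS + action)
        return idx

    def q(w, state, action):
        return sum(w[i] for i in active_tiles(state, action))

    def choose(w, state, rng, eps):
        # Epsilon-greedy action selection over the approximate action values.
        if rng.random() < eps:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax([q(w, state, a) for a in range(N_ACTIONS)]))

    def sarsa(w, step_fn, episodes, alpha=0.1, gamma=0.99, eps=0.1):
        # Linear Sarsa(0); w is the weight vector being learned (possibly transferred in).
        rng = np.random.default_rng(0)
        for _ in range(episodes):
            s, done = 0.0, False
            a = choose(w, s, rng, eps)
            while not done:
                s2, r, done = step_fn(s, a)
                a2 = choose(w, s2, rng, eps)
                target = r + (0.0 if done else gamma * q(w, s2, a2))
                delta = target - q(w, s, a)
                for i in active_tiles(s, a):
                    w[i] += (alpha / N_TILINGS) * delta
                s, a = s2, a2
        return w

    def source_step(s, a):
        # Hypothetical "easy" task: walk right to reach 1.0 with step size 0.1.
        s2 = min(max(s + (0.1 if a == 1 else -0.1), 0.0), 1.0)
        return s2, (1.0 if s2 >= 1.0 else -0.01), s2 >= 1.0

    def target_step(s, a):
        # Hypothetical "harder" task: same goal, but smaller steps (longer episodes).
        s2 = min(max(s + (0.05 if a == 1 else -0.05), 0.0), 1.0)
        return s2, (1.0 if s2 >= 1.0 else -0.01), s2 >= 1.0

    n_weights = N_TILINGS * N_TILES * N_ACTIONS
    w_source = sarsa(np.zeros(n_weights), source_step, episodes=200)
    # Behavior transfer: initialize the target-task learner with the source task's value function.
    w_target = sarsa(w_source.copy(), target_step, episodes=200)

The key step is the final line: rather than starting the target task from zero-initialized weights, the learner starts from the source task's learned value function, which is the sense in which transfer can reduce learning time on the harder task. The paper's actual tasks (Keepaway), state representations, and inter-task mappings are more involved than this toy example.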

Content Area: 12. Machine Learning

Subjects: 12.1 Reinforcement Learning; 7.1 Multi-Agent Systems

Submitted: May 11, 2005

