Autonomous Inter-Task Transfer in Reinforcement Learning Domains

Matthew E. Taylor

In reinforcement learning (RL) problems, agents take sequential actions with the goal of maximizing a reward signal, which may be time-delayed. In recent years, RL methods have gained popularity as learning techniques able to handle complex problems. RL algorithms, unlike many machine learning approaches, do not require correctly labeled training examples and thus may address a wide range of difficult and interesting problems. However, if RL agents begin learning tabula rasa, mastering a task may be slow or infeasible. A significant amount of current RL research therefore focuses on speeding up learning by exploiting domain expertise with varying degrees of autonomy. My thesis will examine one such general method for speeding up learning: transfer learning. In a transfer learning problem, a source task is used to improve performance on, or speed up learning in, a target task; an agent may thus leverage experience from an earlier task when learning the current one. A common formulation presents the agent with a pair of tasks and explicitly directs it to train on one before the other. Alternately, in the spirit of multitask learning or lifelong learning, an agent could consult a library of previously mastered tasks and transfer knowledge from one or more of them to speed up learning in the current task.
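One simple form the abstract's source-to-target transfer can take is copying a learned value function from the source task to initialize the target learner, rather than starting tabula rasa. The sketch below is illustrative only, not the thesis's method: it assumes tabular Q-learning, identical state and action spaces in both tasks, and a hypothetical five-state chain MDP; all function names and parameters are invented for this example.

```python
import random

random.seed(0)

def train(q, transitions, rewards, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a small deterministic MDP; the last state is terminal.

    q[s][a] is updated in place and returned, so a Q-table learned on a
    source task can be passed in to warm-start learning on a target task.
    """
    n_states = len(transitions)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection over the two actions.
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = max(range(2), key=lambda a: q[s][a])
            s2 = transitions[s][a]
            r = rewards[s][a]
            target = r if s2 == n_states - 1 else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Hypothetical 5-state chain: action 1 moves right, action 0 moves left;
# reward 1 only for entering the terminal state from its left neighbor.
N = 5
transitions = [[max(s - 1, 0), min(s + 1, N - 1)] for s in range(N)]
rewards = [[0, 1 if s == N - 2 else 0] for s in range(N)]

# Source task: learn from scratch (tabula rasa initialization).
source_q = train([[0.0, 0.0] for _ in range(N)], transitions, rewards)

# Transfer: copy the source Q-table to initialize the target learner,
# which then needs far fewer episodes to behave well on the target task.
target_q = train([row[:] for row in source_q], transitions, rewards, episodes=20)
```

After transfer, the greedy policy derived from `target_q` already prefers moving toward the goal in every non-terminal state, which is the sense in which prior experience speeds up the current task.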

Subjects: 12.1 Reinforcement Learning; 12. Machine Learning and Discovery

Submitted: Apr 23, 2007

This page is copyrighted by AAAI. All rights reserved.