From Transfer to Scaling: Lessons Learned in Understanding Novel Reinforcement Learning Algorithms

Soumi Ray, Tim Oates

A major drawback of reinforcement learning (RL) is its slow learning rate, and we are interested in speeding it up. We first approached this problem with transfer learning across two domains: we developed a method to transfer knowledge from a fully trained RL domain to a partially trained, related domain (the one whose learning we want to speed up), and this substantially increased the learning rate. While seeking a theoretical justification, we found that our transfer method was in effect scaling the Q-values, and that this scaling was the main cause of the observed speedup. We then scaled the Q-values directly by an appropriate scalar after partial learning in a single domain and observed similar results. Empirical results in a variety of grid worlds, and in a multi-agent block loading domain that is exceptionally difficult to solve with standard reinforcement learning algorithms, show significant learning speedups from scaling.
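The abstract describes scaling a partially learned Q-table by a scalar. As a minimal illustration of that operation (the corridor environment, hyperparameters, and helper names below are our own assumptions, not taken from the paper), the sketch below runs tabular Q-learning for a few episodes, then multiplies every Q-value by a positive constant; note that such scaling preserves the greedy policy while changing the magnitude of subsequent temporal-difference updates:

```python
import random

def make_corridor(n=6):
    """Hypothetical 1-D corridor: states 0..n-1, goal at n-1.
    Actions: 0 = left, 1 = right. Reward 1 on reaching the goal, else 0."""
    def step(s, a):
        s2 = min(n - 1, s + 1) if a == 1 else max(0, s - 1)
        done = (s2 == n - 1)
        return s2, (1.0 if done else 0.0), done
    return step

def q_learning(step, n_states, n_actions, episodes, Q=None,
               alpha=0.5, gamma=0.9, eps=0.2, rng=None):
    """Standard tabular Q-learning; accepts a pre-seeded Q-table so
    training can be resumed after scaling."""
    rng = rng or random.Random(0)
    Q = Q if Q is not None else [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 100:
            # epsilon-greedy action selection
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda i: Q[s][i]))
            s2, r, done = step(s, a)
            # TD update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, t = s2, t + 1
    return Q

def scale_q(Q, c):
    """The intervention described in the abstract, as we read it:
    multiply all Q-values by a positive scalar c after partial learning."""
    return [[c * q for q in row] for row in Q]
```

A usage pattern consistent with the abstract would be: `Q = q_learning(step, 6, 2, episodes=20)` for partial learning, then `Q = scale_q(Q, c)` for some chosen `c`, then resume `q_learning(..., Q=Q)`. The appropriate value of `c` is the paper's contribution and is not reproduced here.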

Subjects: 12. Machine Learning and Discovery; 12.1 Reinforcement Learning

Submitted: May 8, 2008
