AAAI Publications, Twenty-Second International FLAIRS Conference

Generalizing and Categorizing Skills in Reinforcement Learning Agents Using Partial Policy Homomorphisms
Srividhya Rajendran, Manfred Huber

Last modified: 2009-03-16

Abstract


A reinforcement learning agent engaged in life-long learning in a complex, dynamic environment must be able to apply control knowledge acquired in one situation to novel contexts. As part of this, it is important that the agent not only learn a new skill for a specific instance of a task, but also identify similar tasks, form a reusable skill and representational abstractions for the corresponding ''task type'', and apply these abstractions in new, previously unseen contexts. This paper presents a new approach to policy generalization that derives an abstract policy for a set of similar tasks (a ''task type'') by constructing a partial policy homomorphism from a set of basic policies learned for previously seen task instances. The resulting generalized policy can then be applied in new contexts to address new instances of similar tasks. In contrast to many recent approaches to lifelong learning, this approach identifies similar tasks based on the functional characteristics of the corresponding skills and provides a means of transferring the learned knowledge to new situations without requiring complete knowledge of the state space and system dynamics in the new environment.
To illustrate the new policy generalization method and to demonstrate its ability to reuse the acquired knowledge in new contexts, it is applied to a set of grid world examples.
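The core idea can be illustrated with a toy sketch (this is an illustrative assumption, not the paper's actual construction or grid worlds): each instance policy for a "navigate to goal" task is mapped through an abstraction function into a shared abstract state space, and the lifting succeeds only if concrete states that collapse to the same abstract state agree on their action, which is the homomorphism condition on the policy. The abstract policy can then act in a previously unseen instance.

```python
def sign(v):
    # sign of a displacement: -1, 0, or +1
    return (v > 0) - (v < 0)

def offset_to_goal(state, goal):
    # abstraction map f: concrete (x, y) -> sign of displacement to goal
    return (sign(goal[0] - state[0]), sign(goal[1] - state[1]))

def greedy_policy(goal, states):
    # hypothetical instance policy: step along x first, then along y
    policy = {}
    for s in states:
        dx, dy = goal[0] - s[0], goal[1] - s[1]
        if dx != 0:
            policy[s] = 'right' if dx > 0 else 'left'
        elif dy != 0:
            policy[s] = 'up' if dy > 0 else 'down'
    return policy

def lift(policies_with_goals):
    # build the abstract policy from several instance policies; fail if
    # two concrete states mapping to the same abstract state disagree on
    # their action (i.e., the mapping is not a policy homomorphism)
    abstract = {}
    for policy, goal in policies_with_goals:
        for s, a in policy.items():
            z = offset_to_goal(s, goal)
            if abstract.get(z, a) != a:
                return None  # homomorphism condition violated
            abstract[z] = a
    return abstract
```

Because the instance policies here depend on the concrete state only through the sign of the displacement, the lifting succeeds, and the resulting abstract policy prescribes actions for new task instances (new grids, new goal positions) without re-learning.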
