Efficient Reinforcement Learning with Relocatable Action Models

Bethany R. Leffler, Michael L. Littman, Timothy Edmunds

Realistic domains for learning possess regularities that make it possible to generalize experience across related states. This paper explores an environment-modeling framework that represents transitions as state-independent outcomes that are common to all states that share the same type. We analyze a set of novel learning problems that arise in this framework, providing lower and upper bounds. We single out one particular variant of practical interest and provide an efficient algorithm and experimental results in both simulated and robotic environments.
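To make the framework's core idea concrete, below is a minimal, hypothetical sketch (not code from the paper) of a relocatable action model for a grid world. The class name GridRAM, the method names, and the use of relative grid offsets as outcomes are illustrative assumptions; the sketch only shows the key representational choice the abstract describes: transition outcomes are learned per (state type, action) and applied as state-independent relocations, so experience gathered in one state generalizes to every other state of the same type.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a relocatable action model (RAM): outcomes are
# learned per (state type, action) rather than per (state, action), and an
# outcome is applied as a state-independent offset to the current state.
class GridRAM:
    def __init__(self, state_type):
        # state_type: function mapping a state to its type (e.g., terrain class).
        self.state_type = state_type
        # counts[(type, action)][outcome] -> number of times that outcome was observed
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        """Record a transition as a relative outcome shared by all states of this type."""
        outcome = (next_state[0] - state[0], next_state[1] - state[1])
        self.counts[(self.state_type(state), action)][outcome] += 1

    def sample_next(self, state, action):
        """Predict a next state by sampling an outcome learned for this state's type."""
        dist = self.counts[(self.state_type(state), action)]
        if not dist:
            return state  # no experience yet for this (type, action) pair
        outcomes, weights = zip(*dist.items())
        dx, dy = random.choices(outcomes, weights=weights)[0]
        return (state[0] + dx, state[1] + dy)


# Example: experience gathered on one "ice" cell transfers to every other
# "ice" cell, because outcomes are keyed by type rather than by state.
terrain = {(0, 0): "ice", (5, 5): "ice", (2, 2): "sand"}
model = GridRAM(lambda s: terrain.get(s, "sand"))
model.observe((0, 0), "north", (0, 2))      # slid two cells forward on ice
print(model.sample_next((5, 5), "north"))   # likely (5, 7)
```

Keying outcome distributions by type rather than by state is what allows sample-efficient generalization across related states; how many samples such learners need is the kind of question the paper's lower and upper bounds address.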

Subjects: 12.1 Reinforcement Learning; 17. Robotics

Submitted: Apr 24, 2007

