AAAI Publications, Workshops at the Twenty-Fourth AAAI Conference on Artificial Intelligence

Evolutionary Tile Coding: An Automated State Abstraction Algorithm for Reinforcement Learning
Stephen Lin, Robert Wright

Last modified: 2010-07-07

Abstract


Reinforcement learning (RL) algorithms can learn optimal policies for control problems by exploring a domain's state space. Unfortunately, for most problems the state space is too large for RL algorithms to explore fully enough to find good policies. State abstraction is one way of reducing the size and complexity of a domain's state space in order to make RL tractable. In this paper we introduce a new approach for automatically deriving state abstractions, called Evolutionary Tile Coding, that uses a genetic algorithm to evolve effective tile codings. We provide an empirical analysis of the new algorithm, comparing it to another adaptive tile coding method as well as to fixed tile coding. Our results show that our approach automatically derives effective state abstractions for two RL benchmark problems. Additionally, we present an intriguing result showing that the classical mountain car problem's state space can be reduced to just two states while still preserving the discovery of an optimal policy.
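To make the baseline concrete, the following is a minimal sketch of fixed tile coding: a uniform grid discretization that maps a continuous state to a single discrete tile index. The bounds are the standard mountain car state ranges; the bin counts and function name are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of fixed tile coding (uniform grid discretization).
# The 8x8 grid is an assumed example resolution, not taken from the paper.

def tile_code(state, lows, highs, bins):
    """Map a continuous state vector to a single discrete tile index."""
    idx = 0
    for x, lo, hi, n in zip(state, lows, highs, bins):
        # Clamp to the valid range, then find this dimension's bin.
        frac = (min(max(x, lo), hi) - lo) / (hi - lo)
        b = min(int(frac * n), n - 1)
        idx = idx * n + b  # row-major flattening across dimensions
    return idx

# Mountain car: position in [-1.2, 0.6], velocity in [-0.07, 0.07].
lows, highs, bins = (-1.2, -0.07), (0.6, 0.07), (8, 8)
tile = tile_code((-0.5, 0.0), lows, highs, bins)
```

A tabular RL method would then maintain one value estimate per tile; adaptive schemes such as the paper's evolutionary approach instead search for non-uniform partitions that concentrate resolution where it matters.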
