AAAI Publications, Workshops at the Twenty-Fourth AAAI Conference on Artificial Intelligence

Automatic Methods for Continuous State Space Abstraction
Steven Loscalzo, Robert Wright

Last modified: 2010-07-07


Reinforcement learning algorithms are often tasked with learning an optimal control policy in a continuous state space. Since it is infeasible to learn the optimal action for every possible observation in a continuous state space, useful abstractions of the space must be constructed and subsequently learned on. Abstraction techniques that generalize the space into very few abstract states must take care to avoid creating an abstraction that prevents learning the optimal policy. Many commonly used abstractions, such as CMAC, can take considerable effort to tune to ensure a learnable abstraction is created. In this work we propose three methods that derive state abstractions automatically, in part by making use of the dimensionality reduction capability of the RL-SANE algorithm. We show that abstractions derived from these automatic methods can allow a learning algorithm to converge to the optimal policy faster than with a fixed abstraction. Additionally, these techniques are able to break the space into very few abstract states, further facilitating rapid learning.
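To make the tuning burden of fixed abstractions such as CMAC concrete, the sketch below shows a minimal tile-coding discretization of a continuous state, the kind of hand-tuned abstraction the paper contrasts with its automatic methods. This is an illustrative implementation, not the authors' code; the function name and the `n_tiles`/`n_tilings` parameters (the knobs that must be tuned) are assumptions for the example.

```python
import numpy as np

def tile_indices(state, low, high, n_tiles=8, n_tilings=4):
    """Map a continuous state to the set of active tile indices,
    one per offset tiling (a CMAC-style abstraction sketch).

    Illustrative only: n_tiles and n_tilings are the resolution
    parameters that typically require careful hand tuning."""
    state = np.asarray(state, dtype=float)
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    # Position of the state measured in tile units per dimension.
    scaled = (state - low) / (high - low) * n_tiles
    active = []
    for t in range(n_tilings):
        # Each tiling is shifted by a fraction of one tile width.
        offset = t / n_tilings
        coords = np.clip(np.floor(scaled + offset).astype(int), 0, n_tiles - 1)
        # Flatten the per-dimension tile coordinates into one index.
        idx = 0
        for c in coords:
            idx = idx * n_tiles + int(c)
        active.append(t * n_tiles ** len(coords) + idx)
    return active

# Example: a 2-D state in the unit square activates one tile per tiling.
indices = tile_indices([0.5, 0.5], low=[0.0, 0.0], high=[1.0, 1.0])
```

Because each tiling is offset, nearby states share most of their active tiles, which is what lets a learner generalize across the continuous space; choosing `n_tiles` and `n_tilings` poorly is exactly the failure mode (an unlearnable or overly coarse abstraction) that the automatic methods in this paper aim to avoid.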
