Proceedings: Book One
Issue: Proceedings of the AAAI Conference on Artificial Intelligence, 20
Track: Doctoral Consortium
Abstract:
Reinforcement learning problems are commonly tackled with temporal difference methods, which use dynamic programming and statistical sampling to estimate the long-term value of taking each action in each state. In most problems of real-world interest, learning this value function requires a function approximator, which represents the mapping from state-action pairs to values via a concise, parameterized function and uses supervised learning methods to set its parameters. Function approximators make it possible to use temporal difference methods on large problems but, in practice, the feasibility of doing so depends on the ability of the human designer to select an appropriate representation for the value function. My thesis presents a new approach to function approximation that automates some of these difficult design choices by coupling temporal difference methods with policy search methods such as evolutionary computation. It also presents a particular implementation which combines NEAT, a neuroevolutionary policy search method, and Q-learning, a popular temporal difference method, to yield a new method called NEAT+Q that automatically learns effective representations for neural network function approximators. Empirical results in a server job scheduling task demonstrate that NEAT+Q can outperform both NEAT and Q-learning with manually designed neural networks.
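A minimal, hypothetical sketch of the coupling described above, not the paper's actual code: an outer policy-search loop proposes candidate value-function representations, and each candidate is refined by Q-learning during its fitness evaluation. NEAT+Q evolves neural network topologies with NEAT; here the "representation" is simplified to a random linear feature projection over a toy chain MDP, and all function names, hyperparameters, and the environment are illustrative assumptions.

# Hypothetical sketch, not the paper's implementation: evolutionary search over
# value-function representations with Q-learning refining each candidate.
import numpy as np

N_STATES, N_ACTIONS = 6, 2          # toy chain MDP: move left/right, reward at the right end
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Transition of the toy MDP: action 1 moves right, action 0 moves left."""
    nxt = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def evaluate(projection, episodes=50):
    """Fitness of one candidate representation: run Q-learning on a linear
    value function over the projected features and return total reward."""
    weights = np.zeros((projection.shape[1], N_ACTIONS))
    total_reward = 0.0
    for _ in range(episodes):
        state = 0
        for _ in range(20):
            phi = projection[state]                      # features of the current state
            q = phi @ weights                            # Q(s, .) under this representation
            action = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(np.argmax(q))
            nxt, reward = step(state, action)
            target = reward + GAMMA * np.max(projection[nxt] @ weights)
            weights[:, action] += ALPHA * (target - q[action]) * phi   # Q-learning update
            total_reward += reward
            state = nxt
            if reward > 0:                               # episode ends at the goal
                break
    return total_reward

# Outer search over representations: keep the best candidate and mutate it,
# a crude stand-in for NEAT's evolution of network topologies and weights.
population = [rng.normal(size=(N_STATES, 4)) for _ in range(5)]
for _generation in range(8):
    best = max(population, key=evaluate)
    population = [best] + [best + 0.1 * rng.normal(size=best.shape) for _ in range(4)]

print("fitness of best evolved representation:", evaluate(population[0]))

In NEAT+Q the outer loop would instead be NEAT evolving network structure and initial weights, and the inner loop would be Q-learning with the evolved network as the function approximator, so the fitness of each network reflects how well it learns, not just how well it starts.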