Sham Kakade, Michael Kearns, and John Langford

We present metric-E3, a provably near-optimal algorithm for reinforcement learning in Markov decision processes in which a natural metric on the state space permits the construction of accurate local models. The algorithm generalizes the E3 algorithm of Kearns and Singh and assumes access to a black box for approximate planning. Unlike the original E3, metric-E3 finds a near-optimal policy in an amount of time that does not depend directly on the size of the state space, but instead on the covering number of the state space. Informally, the covering number is the number of neighborhoods required for accurate local modeling.
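The covering number informally described above can be illustrated with a simple greedy construction: repeatedly keep any state that is farther than a given radius from all states kept so far. The sketch below is our own illustration, not code from the paper; the function name, the one-dimensional state space, and the absolute-difference metric are all assumptions made for the example.

```python
def greedy_epsilon_cover(states, metric, eps):
    """Greedily select centers so every state lies within eps of some center.

    The size of the returned set is an estimate of the eps-covering number
    of the state space under the given metric. (Illustrative sketch only.)
    """
    centers = []
    for s in states:
        # Keep s as a new center only if no existing center is within eps.
        if all(metric(s, c) > eps for c in centers):
            centers.append(s)
    return centers

# Example: a one-dimensional state space [0, 10] sampled at spacing 0.1,
# with the absolute-difference metric and neighborhood radius eps = 1.0.
states = [i / 10 for i in range(101)]
cover = greedy_epsilon_cover(states, lambda a, b: abs(a - b), eps=1.0)
```

By construction every sampled state is within `eps` of some center, so the length of `cover` bounds the number of neighborhoods needed for local modeling at that scale; the key point of the abstract is that running time can scale with this quantity rather than with the raw number of states.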
