Track: All Papers
Abstract:
In this paper, we investigate the use of reinforcement learning in CBR for evaluating and managing a legacy case base for playing the game of Tetris. Each case corresponds to a local pattern describing the relative heights of a subset of columns where pieces could be placed. We evaluate these patterns through reinforcement learning to determine whether a significant performance improvement can be observed. To estimate the values of the patterns, we compare Q-learning with a simpler temporal difference formulation. Our results indicate that training without discounting yields slightly better performance than the other evaluation schemes. We also explore how the reinforcement values of the patterns can help reduce the size of the case base, and we report on experiments with forgetting cases.
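As a rough illustration of the scheme the abstract describes, the sketch below keeps a table of values for column-height patterns, updates it with an undiscounted TD(0) rule (the gamma = 1 setting the abstract reports working best), and prunes the lowest-valued entries to shrink the case base. This is a minimal sketch under stated assumptions: the class and method names (PatternValueTable, update, forget_lowest) and the keep_fraction parameter are illustrative, not taken from the paper, and the authors' exact update formulation may differ.

```python
from typing import Dict, Tuple

Pattern = Tuple[int, ...]  # relative heights of a subset of columns


class PatternValueTable:
    """Illustrative value table for case patterns, trained with an
    undiscounted temporal-difference rule (a sketch, not the authors'
    exact formulation)."""

    def __init__(self, alpha: float = 0.1) -> None:
        self.alpha = alpha                      # learning rate
        self.values: Dict[Pattern, float] = {}  # pattern -> learned value

    def update(self, pattern: Pattern, reward: float,
               next_pattern: Pattern) -> None:
        # TD(0) update with gamma = 1 (no discounting), matching the
        # abstract's finding that undiscounted training worked best.
        v = self.values.get(pattern, 0.0)
        v_next = self.values.get(next_pattern, 0.0)
        self.values[pattern] = v + self.alpha * (reward + v_next - v)

    def forget_lowest(self, keep_fraction: float = 0.5) -> None:
        # Reduce the case base by forgetting the lowest-valued patterns;
        # keep_fraction is a hypothetical knob for such experiments.
        ranked = sorted(self.values.items(), key=lambda kv: kv[1],
                        reverse=True)
        keep = max(1, int(len(ranked) * keep_fraction))
        self.values = dict(ranked[:keep])


# Hypothetical usage: reward a placement, then prune the case base.
table = PatternValueTable(alpha=0.1)
table.update((0, 1, -1), reward=1.0, next_pattern=(1, 0, 0))
table.forget_lowest(keep_fraction=0.8)
```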