Abstract:
It has previously been established that for Markov learning automata games, the game equilibria are exactly the optimal strategies. In this paper, we extend the game-theoretic view of reinforcement learning to consider the implications for "group rationality" in the more general setting of learning when the Markov property cannot be assumed. We show that for a general class of non-Markov decision processes, if actual-return (Monte Carlo) credit assignment is used with undiscounted returns, the optimal observation-based policies are still guaranteed to be game equilibria under the standard "direct" reinforcement learning approaches; however, if either discounted rewards or a temporal-differences style of credit assignment is used, this guarantee no longer holds.
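To make concrete the two credit-assignment schemes the abstract contrasts, the following is a minimal sketch (not taken from the paper; all function names and the toy episode are illustrative). It computes undiscounted actual (Monte Carlo) returns for an episode, and performs a single discounted TD(0) bootstrap update for comparison:

```python
def monte_carlo_returns(rewards):
    """Actual-return (Monte Carlo) credit assignment with undiscounted
    returns: the return at step t is the plain sum of all rewards
    from t to the end of the episode."""
    returns = []
    total = 0.0
    for r in reversed(rewards):
        total += r          # no discount factor applied
        returns.append(total)
    return list(reversed(returns))

def td0_update(v, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-differences (TD(0)) update: bootstraps from the
    current estimate of the next state's value instead of waiting
    for the actual end-of-episode return."""
    v[state] += alpha * (reward + gamma * v[next_state] - v[state])
    return v

# Toy episode with rewards [1, 0, 2]:
print(monte_carlo_returns([1.0, 0.0, 2.0]))  # -> [3.0, 2.0, 2.0]
```

The contrast is the point of the paper's result: the Monte Carlo scheme assigns credit from actual episode outcomes without discounting, whereas the TD update both discounts (via `gamma`) and bootstraps from value estimates, which the abstract reports can break the equilibrium guarantee in non-Markov settings.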