An Analysis of Non-Markov Automata Games: Implications for Reinforcement Learning

Mark D. Pendrith and Michael J. McGarity

It has previously been established that for Markov learning automata games, the game equilibria are exactly the optimal strategies. In this paper, we extend the game-theoretic view of reinforcement learning to consider the implications for "group rationality" in the more general situation of learning when the Markov property cannot be assumed. We show that for a general class of non-Markov decision processes, if actual-return (Monte Carlo) credit assignment is used with undiscounted returns, the optimal observation-based policies are still guaranteed to be game equilibria under the standard "direct" reinforcement learning approaches; however, if either discounted rewards or a temporal-differences style of credit assignment is used, this guarantee no longer holds.
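The contrast the abstract draws can be illustrated with a minimal sketch. All names, dynamics, and reward values below are hypothetical, chosen only to make the two credit-assignment schemes concrete: an actual-return (Monte Carlo) estimate, which uses the full undiscounted return of an episode, versus a one-step temporal-differences (TD(0)) update with a discount factor, which bootstraps on a value estimate indexed by an observation that may alias several hidden states.

```python
import random

def run_episode(policy_action, steps=3):
    """Generate one episode's reward sequence from a toy hidden-state
    process. Both hidden states emit the same observation, so the
    process is non-Markov from the learner's point of view.
    (Hypothetical dynamics, for illustration only.)"""
    rewards = []
    state = random.choice([0, 1])   # hidden state, not observable
    for _ in range(steps):
        rewards.append(1.0 if state == policy_action else 0.0)
        state = 1 - state           # hidden deterministic dynamics
    return rewards

def mc_return(rewards):
    """Actual-return (Monte Carlo) credit assignment with no
    discounting: the value sample is the full undiscounted return."""
    return sum(rewards)

def td0_update(v, reward, v_next, alpha=0.1, gamma=0.9):
    """One-step TD update with discount factor gamma. Because v_next
    is indexed by an aliased observation, the bootstrapped target can
    be biased in a non-Markov process."""
    return v + alpha * (reward + gamma * v_next - v)
```

Under the abstract's result, estimates built from `mc_return` with undiscounted returns preserve the equilibrium property for observation-based policies, while introducing either the discount `gamma < 1` or the bootstrapping in `td0_update` can break it.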

Copyright © AAAI. All rights reserved.