Proceedings: Game Theoretic and Decision Theoretic Agents
Issue: Papers from the 2007 AAAI Spring Symposium
Abstract:
Classically, approaches to policy learning in multiagent systems assumed that the agents, through interaction and/or preliminary knowledge of all players' reward functions, would find an interdependent solution called an ``equilibrium''. Recently, however, some researchers have questioned the necessity and validity of equilibrium as the central multiagent solution concept. They argue that a ``good'' learning algorithm is one that is efficient with respect to a certain class of counterparts. Adaptive players are an important class of agents that learn their policies separately from maintaining beliefs about their counterparts' future actions, and then make decisions based on that policy and the current belief. In this paper we propose an efficient learning algorithm for play against adaptive counterparts, called Adaptive Dynamics Learner (ADL), which learns a policy over the opponents' adaptive dynamics rather than over simple actions and beliefs and, by doing so, exploits these dynamics to obtain a higher utility than any equilibrium strategy can provide. We tested our algorithm on a large set of well-known and representative matrix games and observed that the ADL agent is highly efficient against Adaptive Play Q-learning (APQ) and Infinitesimal Gradient Ascent (IGA) agents. In self-play, when possible, ADL converges to a Pareto optimal strategy that maximizes the welfare of all players rather than to an equilibrium strategy.
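As a loose illustration of the core idea of learning over an opponent's adaptive dynamics (this is not the paper's ADL algorithm itself), the Python sketch below conditions a tabular Q-learner on the belief state of a fictitious-play opponent in a 2x2 matrix game. The payoff matrices, belief discretization, and hyperparameters are all hypothetical choices made for the sketch.

```python
# Hypothetical sketch only: a learner that treats a fictitious-play opponent's
# empirical belief as the "state" of the opponent's adaptive dynamics and runs
# tabular Q-learning over that state, instead of learning a stateless policy.
import random
from collections import defaultdict

# Illustrative 2x2 payoff matrices (row player = learner, column = opponent).
ROW_PAYOFF = [[3, 0],
              [5, 1]]
COL_PAYOFF = [[3, 5],
              [0, 1]]

def opponent_action(counts):
    """Fictitious-play opponent: best response to the learner's empirical mix."""
    total = sum(counts) or 1
    p0 = counts[0] / total  # estimated probability the learner plays action 0
    exp = [p0 * COL_PAYOFF[0][a] + (1 - p0) * COL_PAYOFF[1][a] for a in (0, 1)]
    return max((0, 1), key=lambda a: exp[a])

def learn(episodes=20000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(lambda: [0.0, 0.0])
    counts = [0, 0]  # opponent's running counts of the learner's past actions
    for _ in range(episodes):
        # Discretize the opponent's belief so it can serve as a table key.
        state = round(counts[0] / (sum(counts) or 1), 2)
        my_a = random.choice((0, 1)) if random.random() < epsilon \
            else max((0, 1), key=lambda a: Q[state][a])
        opp_a = opponent_action(counts)
        reward = ROW_PAYOFF[my_a][opp_a]
        counts[my_a] += 1
        next_state = round(counts[0] / sum(counts), 2)
        # Standard Q-learning update over the opponent's belief state.
        Q[state][my_a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][my_a])
    return Q

if __name__ == "__main__":
    Q = learn()
    print("Learned values for a few belief states:", dict(list(Q.items())[:3]))
```

In this toy setup the learner's value function depends on where the opponent's adaptation currently stands, which is what allows it to steer and exploit that adaptation rather than merely best-respond to a fixed mixed strategy.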