Collaborative Learning in Strategic Environments

Akira Namatame, Noriko Tanoura, and Hiroshi Sato

There is no presumption that the collective behavior of interacting agents leads to collectively satisfactory results. How well agents adapt to their social environment is different from how satisfactory a social environment they collectively create. In this paper, we attempt to develop a deeper understanding of this issue by specifying how agents interact as they adapt their behavior. We consider problems of asymmetric coordination, formulated as minority games, and address the following question: how do interacting agents achieve efficient coordination without any central authority, by self-organizing macroscopic order from the bottom up? We investigate several types of learning methodologies, including a new model, give-and-take learning, in which agents yield to others if they gain and randomize their actions if they lose or do not gain. We show that evolutionary learning is the most efficient in asymmetric strategic environments.
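The give-and-take rule described above can be sketched in a minority-game simulation. The following is a minimal sketch, not the authors' implementation: the population size, round count, and the reading of "yield" (a winning agent switches sides, a losing agent randomizes) are all assumptions made here for illustration.

```python
import random

def minority_game_give_and_take(n_agents=101, rounds=1000, seed=0):
    """Minority game with a give-and-take update rule (one plausible
    reading of the abstract; the exact rule details are assumptions).

    Each round every agent chooses side 0 or 1; agents on the minority
    side gain.  Agents that gain yield to others (switch sides next
    round); agents that do not gain randomize their next action.
    Returns the mean minority size, whose ideal value is (n_agents-1)/2.
    """
    rng = random.Random(seed)
    actions = [rng.randint(0, 1) for _ in range(n_agents)]
    minority_sizes = []
    for _ in range(rounds):
        ones = sum(actions)
        zeros = n_agents - ones
        minority = 1 if ones < zeros else 0  # odd n_agents: never a tie
        minority_sizes.append(min(ones, zeros))
        for i, a in enumerate(actions):
            if a == minority:
                actions[i] = 1 - a           # gained: yield to others
            else:
                actions[i] = rng.randint(0, 1)  # did not gain: randomize
    return sum(minority_sizes) / rounds

print(minority_game_give_and_take())
```

Coordination efficiency here is the gap between the mean minority size and its ideal value of 50 for 101 agents; the paper's comparison of learning rules can be reproduced by swapping in other update rules at the marked lines.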

This page is copyrighted by AAAI. All rights reserved.