Perpetual Learning for Non-Cooperative Multiple Agents

Luke Dickens

This paper examines, by argument, the dynamics of the sequences of behavioural choices made when non-cooperative, restricted-memory agents learn in partially observable stochastic games. These sequences of combined agent strategies (joint-policies) can be thought of as a walk through the space of all possible joint-policies. We argue that this walk, while containing random elements, is also shaped by each agent's drive to improve its current situation at each point, and we posit a learning-pressure field across policy space to represent this drive. Different learning choices may skew this learning pressure and affect the simultaneous joint learning of multiple agents.
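
To illustrate the joint-policy walk described above (this is a minimal sketch, not the paper's formal model), the following Python example has two independent softmax Q-learners repeatedly play Matching Pennies, a strictly non-cooperative game, and records each agent's mixed strategy after every update. The recorded trajectory through joint-policy space mixes random exploration with each agent's pull toward higher payoff, loosely mirroring the learning-pressure idea. All names and constants here (PAYOFF, softmax_policy, the learning rate and temperature) are hypothetical choices made for this sketch.

    import numpy as np

    # Illustrative toy (not the paper's formal model): two independent
    # softmax Q-learners repeatedly playing Matching Pennies, a strictly
    # non-cooperative (zero-sum) game.  Recording each agent's mixed
    # strategy after every update gives a trajectory through joint-policy
    # space -- the "walk" described above, with stochastic exploration
    # superimposed on each agent's drive to improve its own payoff.

    rng = np.random.default_rng(0)

    # Payoff to agent 0 (agent 1 receives the negative).
    PAYOFF = np.array([[+1.0, -1.0],
                       [-1.0, +1.0]])

    ALPHA, TEMP, STEPS = 0.1, 0.5, 5000

    q = [np.zeros(2), np.zeros(2)]      # per-agent action values
    trajectory = []                     # sequence of joint mixed strategies

    def softmax_policy(values, temp):
        z = np.exp((values - values.max()) / temp)
        return z / z.sum()

    for t in range(STEPS):
        policies = [softmax_policy(q[i], TEMP) for i in range(2)]
        trajectory.append((policies[0][0], policies[1][0]))  # P(action 0) for each agent

        a0 = rng.choice(2, p=policies[0])
        a1 = rng.choice(2, p=policies[1])
        r0 = PAYOFF[a0, a1]

        # Each agent updates only its own chosen action's value: neither
        # observes the other's strategy directly, only its own payoff.
        q[0][a0] += ALPHA * (r0 - q[0][a0])
        q[1][a1] += ALPHA * (-r0 - q[1][a1])

    print("first point on the walk:", trajectory[0])
    print("last point on the walk: ", trajectory[-1])

Plotting the recorded trajectory (each agent's probability of its first action over time) gives one concrete picture of a walk through joint-policy space of the kind the abstract discusses.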

Subjects: 12. Machine Learning and Discovery; 7.1 Multi-Agent Systems

Submitted: Apr 8, 2008
