Evolutionary Computing in Cooperative Multiagent Environments

Lawrence Bull and Terence C. Fogarty

The fields of Artificial Intelligence and Artificial Life have both focused on complex systems in which agents must cooperate to achieve certain goals. In our work we examine the performance of the genetic algorithm when applied to systems of this type; that is, we examine the use of population-based evolutionary computing techniques within cooperative multi-agent environments. In extending the genetic algorithm to such environments we introduce three macro-level operators to reduce the amount of knowledge required a priori: the joining of agents (symbiogenesis), the transfer of genetic material between agents, and the speciation of initially homogeneous agents. These operators are used in conjunction with a generic rule-based framework, a simplified version of Pittsburgh-style classifier systems, which we alter so that direct systemic communication can evolve between the agents thus represented. In this paper we use a simulated trail-following task to demonstrate these techniques, finding that they can give improved performance.
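To make the three macro-level operators concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a Pittsburgh-style representation in which each agent's genome is a list of (condition, action) rules, and the function names, rule encoding, and parameters are all assumptions introduced here for illustration.

```python
import random

# Illustrative Pittsburgh-style agent: the genome is a whole rule set,
# here a list of (condition-bits, action-bit) pairs. This encoding is an
# assumption for the sketch, not the paper's actual representation.
def make_agent(n_rules=4, n_bits=3):
    return [(tuple(random.randint(0, 1) for _ in range(n_bits)),
             random.randint(0, 1)) for _ in range(n_rules)]

def join(agent_a, agent_b):
    # Symbiogenesis: two previously separate agents fuse into a single
    # composite agent carrying both rule sets.
    return agent_a + agent_b

def transfer(donor, recipient):
    # Genetic-material transfer: one randomly chosen rule is copied from
    # the donor's genome into the recipient's genome.
    return recipient + [random.choice(donor)]

def speciate(population, n_species=2):
    # Speciation: an initially homogeneous population is partitioned into
    # separately evolving (mating-restricted) subpopulations.
    shuffled = population[:]
    random.shuffle(shuffled)
    return [shuffled[i::n_species] for i in range(n_species)]
```

Each operator acts at the level of whole genomes or subpopulations rather than individual genes, which is why the abstract calls them macro-level: they change which agents exist and how they may recombine, not just the contents of a single genome.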

Copyright © AAAI. All rights reserved.