Abstract:
In a world inhabited by multiple agents, coordination is key to group as well as individual success. Coordination is essential for cooperative, indifferent, and even adversarial agents. As computer scientists, we are interested in developing computational mechanisms that are domain independent and robust in the presence of noisy, incomplete, and out-of-date information. Whereas previous research efforts focused on the offline design of agent organizations, behavioral rules, negotiation protocols, and the like, it is now recognized that agents operating in open, dynamic environments must be able to adapt flexibly to changing demands and opportunities. To exploit the opportunities presented and avoid pitfalls, agents need to learn about other agents and adapt their local behavior based on group composition and dynamics. Though prevalent supervised, unsupervised, and reinforcement learning techniques can serve as starting points for effective learning in multiagent settings, they must be augmented to match environmental demands and agent characteristics.
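The last point can be made concrete with a minimal sketch: two independent Q-learners playing a repeated coordination game. The game, the QLearner class, and all parameter values below are illustrative assumptions, not taken from the paper; the sketch only shows that each agent's reward depends on the other agent's choice, which is why off-the-shelf single-agent learning rules need augmentation in multiagent settings.

```python
import random
from collections import defaultdict

# Hypothetical 2x2 coordination game: both agents are rewarded only
# when they choose the same action (0 or 1).
ACTIONS = [0, 1]

def coordination_payoff(a1, a2):
    return (1.0, 1.0) if a1 == a2 else (0.0, 0.0)

class QLearner:
    """Standard single-agent Q-learning applied independently per agent.

    Each agent treats the other as part of the environment, so its reward
    signal is non-stationary -- the core difficulty single-agent techniques
    face in multiagent settings.
    """
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = defaultdict(float)   # action -> estimated value (stateless repeated game)
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration probability

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

agents = [QLearner(), QLearner()]
for step in range(5000):
    a1, a2 = agents[0].act(), agents[1].act()
    r1, r2 = coordination_payoff(a1, a2)
    agents[0].update(a1, r1)
    agents[1].update(a2, r2)

print("Agent 1 Q-values:", dict(agents[0].q))
print("Agent 2 Q-values:", dict(agents[1].q))
```

Under these assumptions the two learners usually converge on one of the two coordinated joint actions, but which one depends on the interleaved exploration of both agents; augmentations of the basic rule (for example, modeling the other agent's behavior) aim to make such outcomes more reliable.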