Proceedings: Collaborative Learning Agents
Issue: Papers from the 2002 AAAI Spring Symposium
Track: Contents
Abstract:
In a multiagent system where agents are peers collaborating on a global task or resource-allocation goal, coalitions are usually formed dynamically from the bottom up. Each agent has high autonomy, and because decision making is distributed, the system as a whole tends to be anarchic. In this paper, we present a negotiation-based coalition formation approach in which agents learn in two ways to improve the chance of successful coalition formation. First, every agent evaluates the utility of its coalition candidates via reinforcement learning over past negotiation outcomes and behaviors; as a result, an agent assigns its task requirements differently depending on what it has learned from prior interactions with its neighbors. Second, each agent uses a case-based reasoning (CBR) mechanism to learn useful negotiation strategies that dictate how negotiations should be executed. An agent also draws on its past relationship with a particular neighbor when negotiating with that neighbor. This collaborative learning allows two negotiation partners to reach a deal more effectively and agents to form better coalitions faster.
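The abstract only outlines these two mechanisms; the Python sketch below is a minimal, hypothetical illustration of them, not the authors' implementation. It pairs a reinforcement-style update of per-neighbor coalition utilities with a nearest-case retrieval of negotiation strategies. All names (Agent, update_utility, retrieve_strategy), the learning rate alpha, and the success threshold are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class NeighborModel:
    # Learned estimate that negotiating with this neighbor succeeds (hypothetical).
    utility: float = 0.5

class Agent:
    def __init__(self, alpha=0.1):
        self.alpha = alpha          # learning rate (assumed value)
        self.neighbors = {}         # neighbor name -> NeighborModel
        self.case_base = []         # stored (situation, strategy, outcome) cases

    def update_utility(self, neighbor, outcome):
        # Reinforcement-style update toward the observed outcome in [0, 1].
        model = self.neighbors.setdefault(neighbor, NeighborModel())
        model.utility += self.alpha * (outcome - model.utility)

    def rank_candidates(self, candidates):
        # Assign task requirements preferentially to high-utility neighbors.
        return sorted(
            candidates,
            key=lambda n: self.neighbors.get(n, NeighborModel()).utility,
            reverse=True,
        )

    def retain_case(self, situation, strategy, outcome):
        # CBR retain step: remember how a negotiation was run and how it ended.
        self.case_base.append((situation, strategy, outcome))

    def retrieve_strategy(self, situation):
        # CBR retrieve/reuse step: copy the strategy of the most similar
        # previously successful case (success threshold 0.5 is assumed).
        successes = [c for c in self.case_base if c[2] > 0.5]
        if not successes:
            return "default"
        nearest = min(successes, key=lambda c: self._distance(c[0], situation))
        return nearest[1]

    @staticmethod
    def _distance(a, b):
        # Toy squared-Euclidean distance over numeric situation features.
        return sum((x - y) ** 2 for x, y in zip(a, b))

# Example: learn from two negotiations, then pick partners and a strategy.
agent = Agent()
agent.update_utility("B", outcome=1.0)        # deal reached with neighbor B
agent.update_utility("C", outcome=0.0)        # negotiation with C failed
print(agent.rank_candidates(["C", "B"]))      # ['B', 'C']

agent.retain_case((0.9, 0.1), "concede-slowly", outcome=0.8)
print(agent.retrieve_strategy((0.8, 0.2)))    # 'concede-slowly'

In this toy version, each negotiation outcome nudges the stored utility of the involved neighbor, so task requirements drift toward reliable partners, while the case base lets an agent reuse whichever negotiation strategy worked in the most similar past situation.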