Learning to Form Negotiation Coalitions in a Multiagent System

Leen-Kiat Soh and Costas Tsatsoulis

In a multiagent system where agents are peers and collaborate to achieve a global task or resource allocation goal, coalitions are usually formed dynamically, from the bottom up. Each agent has high autonomy, and the system as a whole tends to be anarchic due to the distributed decision-making process. In this paper, we present a negotiation-based coalition formation approach that learns in two different ways to improve the chance of successful coalition formation. First, every agent evaluates the utility of its coalition candidates via reinforcement learning of past negotiation outcomes and behaviors. As a result, an agent assigns its task requirements differently based on what it has learned from its past interactions with its neighbors. Second, each agent uses a case-based reasoning (CBR) mechanism to learn useful negotiation strategies that dictate how negotiations should be executed. Furthermore, an agent also learns from its past relationship with a particular neighbor when conducting a negotiation with that neighbor. This collaborative learning behavior allows two negotiation partners to reach a deal more effectively, and agents to form better coalitions faster.
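The first learning mechanism described above, in which an agent evaluates coalition candidates by reinforcing utilities from past negotiation outcomes, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the class name, the 0.5 prior, and the learning-rate value are all assumptions made for the example.

```python
class CoalitionLearner:
    """Sketch: an agent that learns per-neighbor utilities from
    past negotiation outcomes and ranks coalition candidates."""

    def __init__(self, neighbors, alpha=0.2):
        self.alpha = alpha                           # learning rate (assumed value)
        self.utility = {n: 0.5 for n in neighbors}   # neutral prior per neighbor (assumed)

    def record_outcome(self, neighbor, success):
        # Reinforcement-style update: move the neighbor's utility
        # toward 1 after a successful negotiation, toward 0 after a failure.
        reward = 1.0 if success else 0.0
        self.utility[neighbor] += self.alpha * (reward - self.utility[neighbor])

    def rank_candidates(self):
        # Prefer neighbors whose past negotiations suggest higher utility,
        # so task requirements can be assigned to the most promising partners first.
        return sorted(self.utility, key=self.utility.get, reverse=True)


agent = CoalitionLearner(["A", "B", "C"])
agent.record_outcome("B", success=True)    # B's utility rises to 0.6
agent.record_outcome("C", success=False)   # C's utility falls to 0.4
print(agent.rank_candidates())             # ['B', 'A', 'C']
```

Under this sketch, candidates with a history of successful negotiations float to the top of the ranking, which is the sense in which past interactions change how an agent distributes its task requirements among neighbors.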

This page is copyrighted by AAAI. All rights reserved.