Investigating Reinforcement Learning in Multiagent Coalition Formation

Xin Li and Leen-Kiat Soh

In this paper we investigate the use of reinforcement learning to address the multiagent coalition formation problem in dynamic, uncertain, real-time, and noisy environments. To adapt to these complex environmental factors, we equip each agent with the capability of case-based reinforcement learning, an integration of case-based reasoning and reinforcement learning. An agent uses case-based reasoning to derive a coalition formation plan in real time from past experience, and then instantiates that plan in the dynamic and uncertain environment through reinforcement learning over its coalition formation experience. We focus on describing multiple aspects of applying reinforcement learning to multiagent coalition formation. We classify two types of reinforcement learning: case-oriented reinforcement learning and peer-related reinforcement learning, corresponding to strategic, offline learning and tactical, online learning scenarios, respectively. Since an agent may learn about other agents' joint or individual behavior during coalition formation, we further distinguish joint-behavior reinforcement learning from individual-behavior reinforcement learning. We embed this learning approach in a multi-phase coalition formation model and have implemented it.
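The peer-related, online learning described above can be illustrated with a minimal sketch. The class and method names below (`CoalitionAgent`, `select_coalition`, `update`) are hypothetical, not from the paper: an agent maintains a learned estimate of each peer's helpfulness, selects coalition members epsilon-greedily, and updates its estimates from the reward of each coalition formation episode.

```python
import random


class CoalitionAgent:
    """Hedged sketch of peer-related reinforcement learning: the agent
    keeps a Q-value per peer (estimated helpfulness) and updates it
    after each coalition formation episode. Names are illustrative."""

    def __init__(self, peers, alpha=0.3, epsilon=0.1, seed=0):
        self.q = {p: 0.0 for p in peers}  # learned estimate per peer
        self.alpha = alpha                # learning rate
        self.epsilon = epsilon            # exploration rate
        self.rng = random.Random(seed)

    def select_coalition(self, size):
        """Epsilon-greedy selection of `size` coalition members."""
        if self.rng.random() < self.epsilon:
            return self.rng.sample(list(self.q), size)  # explore
        # Exploit: pick the peers currently estimated most helpful.
        return sorted(self.q, key=self.q.get, reverse=True)[:size]

    def update(self, coalition, reward):
        """Move each member's estimate toward the episode reward."""
        for p in coalition:
            self.q[p] += self.alpha * (reward - self.q[p])
```

Under this sketch, repeated episodes steer future coalition choices toward peers that historically yielded higher rewards, which is the tactical, online adaptation the abstract attributes to peer-related reinforcement learning.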

Copyright © AAAI. All rights reserved.