Shapley Q-Value: A Local Reward Approach to Solve Global Reward Games

Authors

  • Jianhong Wang, Imperial College London
  • Yuan Zhang, Laiye Network Technology Co., Ltd.
  • Tae-Kyun Kim, Imperial College London
  • Yunjie Gu, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v34i05.6220

Abstract

Cooperative games are a critical research area in multi-agent reinforcement learning (MARL). The global reward game is a subclass of cooperative games in which all agents aim to maximize a global reward. Credit assignment is an important problem studied in the global reward game. Most previous works adopted a non-cooperative-game theoretical framework with the shared reward approach, i.e., each agent is directly assigned the shared global reward. This, however, may give each agent a reward that inaccurately reflects its contribution to the group, which could lead to inefficient learning. To deal with this problem, we i) introduce a cooperative-game theoretical framework called the extended convex game (ECG), which is a superset of the global reward game, and ii) propose a local reward approach called the Shapley Q-value. The Shapley Q-value distributes the global reward so that it reflects each agent's own contribution, in contrast to the shared reward approach. Moreover, we derive an MARL algorithm called Shapley Q-value deep deterministic policy gradient (SQDDPG), which uses the Shapley Q-value as the critic for each agent. We evaluate SQDDPG on Cooperative Navigation, Prey-and-Predator, and Traffic Junction, and compare it with state-of-the-art algorithms such as MADDPG, COMA, Independent DDPG, and Independent A2C. In the experiments, SQDDPG shows a significantly improved convergence rate. Finally, we plot the Shapley Q-values and validate the property of fair credit assignment.
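For orientation, the following is a minimal sketch of the idea behind the Shapley Q-value, assuming the standard Shapley value formula is applied to coalition Q-values; the notation here is illustrative and not taken verbatim from the paper:

$$
\Phi_i(s, \boldsymbol{a}) \;=\; \sum_{\mathcal{C} \subseteq \mathcal{N} \setminus \{i\}} \frac{|\mathcal{C}|!\,\big(|\mathcal{N}|-|\mathcal{C}|-1\big)!}{|\mathcal{N}|!} \Big[ Q\big(s, \boldsymbol{a}_{\mathcal{C} \cup \{i\}}\big) - Q\big(s, \boldsymbol{a}_{\mathcal{C}}\big) \Big]
$$

Here $\mathcal{N}$ is the set of agents, $\mathcal{C}$ ranges over coalitions excluding agent $i$, and $Q(s, \boldsymbol{a}_{\mathcal{C}})$ is the value of coalition $\mathcal{C}$ acting in state $s$. Each agent's credit $\Phi_i$ is a weighted average of its marginal contributions over all coalitions, and by the efficiency property of the Shapley value the $\Phi_i$ sum to the value of the grand coalition, which is what underpins fair credit assignment.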

Published

2020-04-03

How to Cite

Wang, J., Zhang, Y., Kim, T.-K., & Gu, Y. (2020). Shapley Q-Value: A Local Reward Approach to Solve Global Reward Games. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7285-7292. https://doi.org/10.1609/aaai.v34i05.6220

Section

AAAI Technical Track: Multiagent Systems