Combinatorial Q-Learning for Dou Di Zhu

Authors

  • Yang You, Shanghai Jiao Tong University
  • Liangwei Li, Shanghai Jiao Tong University
  • Baisong Guo, Shanghai Jiao Tong University
  • Weiming Wang, Shanghai Jiao Tong University
  • Cewu Lu, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aiide.v16i1.7445

Abstract

Deep reinforcement learning (DRL) has attracted considerable attention in recent years and has been shown to play Atari games and Go at or above human level. However, those games have a small, fixed action space and can be handled with a simple convolutional network. In this paper, we study Dou Di Zhu, a popular Asian card game in which two adversarial groups of agents must consider numerous card combinations at each time step, leading to a huge action space. We propose a novel method to handle combinatorial actions, which we call combinatorial Q-learning (CQL). We employ a two-stage network to reduce the action space and leverage order-invariant max-pooling operations to extract relationships among primitive actions. Results show that our method outperforms baseline learning algorithms such as naive Q-learning and A3C. We develop an easy-to-use card-game environment, train all agents adversarially from scratch with knowledge of only the game rules, and verify that our agents are competitive with humans. Our code to reproduce all reported results is available at github.com/qq456cvb/doudizhu-C.
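The abstract's key mechanism, an order-invariant max-pooling over embeddings of primitive actions, can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' released implementation; the class name ActionSetEncoder, the embedding dimension, and the primitive-action vocabulary size are all hypothetical. Because max-pooling is commutative over the set dimension, permuting the primitives leaves the encoding unchanged, which is what lets a network treat a card combination as a set rather than a sequence.

    # Minimal sketch (assumed names and dimensions, not the paper's code):
    # encode a variable-size set of primitive actions into one fixed-size,
    # permutation-invariant vector via per-element MLP + max-pooling.
    import torch
    import torch.nn as nn

    class ActionSetEncoder(nn.Module):
        def __init__(self, num_primitives: int, embed_dim: int = 64):
            super().__init__()
            self.embed = nn.Embedding(num_primitives, embed_dim)
            self.mlp = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU())

        def forward(self, primitive_ids: torch.Tensor) -> torch.Tensor:
            # primitive_ids: (batch, set_size) integer ids of primitive actions
            x = self.mlp(self.embed(primitive_ids))  # (batch, set_size, embed_dim)
            # max over the set dimension: output does not depend on ordering
            return x.max(dim=1).values               # (batch, embed_dim)

    enc = ActionSetEncoder(num_primitives=128)
    a = enc(torch.tensor([[3, 7, 1]]))
    b = enc(torch.tensor([[7, 1, 3]]))
    print(torch.allclose(a, b))  # True: same set, any order -> same encoding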

Published

2020-10-01

How to Cite

You, Y., Li, L., Guo, B., Wang, W., & Lu, C. (2020). Combinatorial Q-Learning for Dou Di Zhu. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16(1), 301-307. https://doi.org/10.1609/aiide.v16i1.7445