On Reinforcement Learning for Full-Length Game of StarCraft

Authors

  • Zhen-Jia Pang, Nanjing University
  • Ruo-Ze Liu, Nanjing University
  • Zhou-Yu Meng, Nanjing University
  • Yi Zhang, Nanjing University
  • Yang Yu, Nanjing University
  • Tong Lu, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014691

Abstract

StarCraft II poses a grand challenge for reinforcement learning. The main difficulties include a huge state space, a varying action space, and a long horizon. In this paper, we investigate a set of reinforcement learning techniques for the full-length game of StarCraft II. We study a hierarchical approach whose hierarchy involves two levels of abstraction. The first is macro-actions extracted from expert demonstration trajectories, which reduce the action space by an order of magnitude while remaining effective. The second is a two-layer hierarchical architecture, which is modular and easy to scale. We also investigate a curriculum transfer learning approach that trains the agent against the simplest opponent first and then against progressively harder ones. On a 64×64 map with a restricted unit set, we train the agent on a single machine with 4 GPUs and 48 CPU threads. We achieve a winning rate of more than 99% against the level-1 built-in AI. With the curriculum transfer learning algorithm and a mixture of combat models, we achieve a winning rate of over 93% against the most difficult non-cheating built-in AI (level-7) within days. We hope this study sheds some light on future research in large-scale reinforcement learning.
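For readers who want a concrete picture of the training scheme the abstract describes, below is a minimal, self-contained Python sketch (not the authors' implementation) of a curriculum loop over built-in AI difficulty levels, with a two-level agent choosing among macro-actions. The environment, agent internals, macro-action names, and all numbers are illustrative placeholders; the real system learns the controller and sub-policies with reinforcement learning on StarCraft II.

```python
import random


class DummyCombatEnv:
    """Toy stand-in for the restricted StarCraft II setting: a fixed-length
    episode that ends with a win/loss signal. For illustration only."""

    def __init__(self, difficulty):
        self.difficulty = difficulty
        self.t = 0

    def reset(self):
        self.t = 0
        return {"minerals": 50, "army": 0}  # placeholder observation

    def step(self, macro_action):
        self.t += 1
        done = self.t >= 20
        # Harder opponents are (artificially) harder to beat in this stub.
        reward = 1.0 if done and random.random() > 0.1 * self.difficulty else 0.0
        return {"minerals": 50 + self.t, "army": self.t}, reward, done


class TwoLevelAgent:
    """Sketch of the two-layer hierarchy from the abstract: a controller
    picks a sub-policy, and the sub-policy picks one of the macro-actions
    mined from expert replays. Both choices are random placeholders here."""

    def __init__(self, macro_actions, num_sub_policies=2):
        self.macro_actions = macro_actions
        self.num_sub_policies = num_sub_policies

    def act(self, observation):
        # Controller (top layer) picks a sub-policy; the sub-policy (bottom
        # layer) picks a macro-action from its own subset of the action set.
        sub = random.randrange(self.num_sub_policies)
        pool = self.macro_actions[sub::self.num_sub_policies]
        return random.choice(pool)

    def update(self, trajectory):
        pass  # placeholder for the RL update (e.g. an actor-critic step)


def train_on_level(agent, difficulty, episodes=200):
    """Train against one built-in-AI difficulty and return the win rate."""
    wins = 0
    for _ in range(episodes):
        env = DummyCombatEnv(difficulty)
        obs, done, reward, trajectory = env.reset(), False, 0.0, []
        while not done:
            action = agent.act(obs)
            obs, reward, done = env.step(action)
            trajectory.append((obs, action, reward))
        agent.update(trajectory)
        wins += int(reward > 0)
    return wins / episodes


if __name__ == "__main__":
    macro_actions = ["build_worker", "build_army", "attack", "defend"]
    agent = TwoLevelAgent(macro_actions)
    # Curriculum transfer: start at level 1 and carry the same agent
    # (and, in the real system, its learned weights) to harder levels.
    for level in range(1, 8):
        win_rate = train_on_level(agent, level)
        print(f"built-in AI level {level}: win rate {win_rate:.0%}")
```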

Published

2019-07-17

How to Cite

Pang, Z.-J., Liu, R.-Z., Meng, Z.-Y., Zhang, Y., Yu, Y., & Lu, T. (2019). On Reinforcement Learning for Full-Length Game of StarCraft. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4691-4698. https://doi.org/10.1609/aaai.v33i01.33014691

Section

AAAI Technical Track: Machine Learning