Improving Search with Supervised Learning in Trick-Based Card Games

Authors

  • Christopher Solinas University of Alberta
  • Douglas Rebstock University of Alberta
  • Michael Buro University of Alberta

DOI:

https://doi.org/10.1609/aaai.v33i01.33011158

Abstract

In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds to the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms, with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given the move history. In particular, we use predictions about the locations of individual cards, made by a deep neural network trained on data from human gameplay, to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.

Published

2019-07-17

How to Cite

Solinas, C., Rebstock, D., & Buro, M. (2019). Improving Search with Supervised Learning in Trick-Based Card Games. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1158-1165. https://doi.org/10.1609/aaai.v33i01.33011158

Section

AAAI Technical Track: Applications