AAAI Publications, Thirty-First AAAI Conference on Artificial Intelligence

Dynamic Thresholding and Pruning for Regret Minimization
Noam Brown, Christian Kroer, Tuomas Sandholm

Last modified: 2017-02-10

Abstract


Regret minimization is widely used in determining strategies for imperfect-information games and in online learning. In large games, computing the regrets associated with a single iteration can be slow. For this reason, pruning, in which parts of the decision tree are not traversed in every iteration, has emerged as an essential method for speeding up iterations in large games. The ability to prune is a primary reason why the Counterfactual Regret Minimization (CFR) algorithm using regret matching has emerged as the most popular iterative algorithm for imperfect-information games, despite its relatively poor convergence bound. In this paper, we introduce dynamic thresholding, in which a threshold is set at every iteration such that any action in the decision tree with probability below the threshold is set to zero probability. This enables pruning for the first time in a wide range of algorithms. We prove that dynamic thresholding can be applied to Hedge while increasing its convergence bound by only a constant factor in terms of the number of iterations. Experiments demonstrate a substantial improvement in performance for Hedge as well as the excessive gap technique.
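The core idea can be sketched in a few lines. The following is a minimal, hypothetical illustration of Hedge with dynamic thresholding in the single-decision (normal-form) setting; the threshold schedule `1 / (t * n_actions)`, the learning rate, and the function names are illustrative assumptions, not the paper's exact constants or API.

```python
import math

def hedge_with_thresholding(payoffs_fn, n_actions, T, eta=0.5):
    """Hedge (multiplicative weights) with dynamic thresholding:
    at each iteration, actions whose probability falls below a
    threshold are set to zero probability, so the subtrees they
    lead to could be pruned. The schedule 1/(t * n_actions) is an
    illustrative choice, not the paper's exact threshold."""
    cum_payoff = [0.0] * n_actions
    probs = [1.0 / n_actions] * n_actions
    for t in range(1, T + 1):
        # Standard Hedge distribution: softmax of cumulative payoffs
        # (shifted by the max for numerical stability).
        m = max(cum_payoff)
        w = [math.exp(eta * (p - m)) for p in cum_payoff]
        z = sum(w)
        probs = [x / z for x in w]
        # Dynamic thresholding: zero out low-probability actions,
        # then renormalize. Since the max probability is at least
        # 1/n_actions, at least one action always survives.
        threshold = 1.0 / (t * n_actions)  # illustrative schedule
        probs = [p if p >= threshold else 0.0 for p in probs]
        z = sum(probs)
        probs = [p / z for p in probs]
        # In a game tree, zero-probability actions need not be
        # traversed this iteration; that skipping is the pruning.
        payoffs = payoffs_fn(probs)
        for i in range(n_actions):
            cum_payoff[i] += payoffs[i]
    return probs
```

In a full extensive-form game, the same zeroing would be applied at each decision point during the tree traversal, so branches with zero probability are simply not visited on that iteration.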

Keywords


extensive-form game; equilibrium computation; regret minimization; convex optimization
