Robust Solutions in Stackelberg Games: Addressing Boundedly Rational Human Preference Models

Manish Jain, Fernando Ordonez, James Pita, Christopher Portway, Milind Tambe, Craig Western, Praveen Paruchuri, Sarit Kraus

Stackelberg games represent an important class of games in which one player, the leader, commits to a strategy and the remaining players, the followers, make their decisions with knowledge of the leader's commitment. Existing algorithms for Bayesian Stackelberg games find optimal solutions while modeling uncertainty over follower types with an a priori probability distribution. Unfortunately, in real-world applications, the leader may also face uncertainty over the follower's response, which invalidates the optimality guarantees of these algorithms. Such uncertainty arises because the follower's specific preferences, or the follower's observations of the leader's strategy, may deviate from the rational response, and it is not amenable to an a priori probability distribution. These conditions especially hold when dealing with human subjects. To address these uncertainties while still providing quality guarantees, we propose three new robust algorithms based on mixed-integer linear programs (MILPs) for Bayesian Stackelberg games. A key result of this paper is a detailed experimental analysis demonstrating that these new MILPs deal better with human responses: a conclusion based on 800 games with 57 human subjects acting as followers. We also provide run-time results for these MILPs.
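The commitment advantage described above can be illustrated with a minimal sketch: a toy 2x2 Stackelberg game (the payoff matrices below are illustrative assumptions, not taken from the paper) where the leader commits to a mixed strategy, and a perfectly rational follower observes it and plays a best response, with ties broken in the leader's favor (the strong Stackelberg convention).

```python
# Toy 2x2 Stackelberg game (illustrative payoffs, NOT from the paper).
# Rows index the leader's pure strategies, columns the follower's.
LEADER   = [[2, 4],
            [1, 3]]
FOLLOWER = [[1, 0],
            [0, 2]]

def leader_value(p):
    """Leader's payoff when committing to play row 0 with probability p."""
    # Expected payoffs for each follower action under the commitment.
    f = [p * FOLLOWER[0][j] + (1 - p) * FOLLOWER[1][j] for j in (0, 1)]
    l = [p * LEADER[0][j] + (1 - p) * LEADER[1][j] for j in (0, 1)]
    best = max(f)
    # Among the follower's best responses, assume leader-favorable tie-breaking.
    return max(l[j] for j in (0, 1) if f[j] == best)

# Grid search over the commitment probability (fine enough for a 2x2 toy;
# the full algorithms in the paper instead solve MILPs).
best_p, best_v = max(((p / 1000, leader_value(p / 1000)) for p in range(1001)),
                     key=lambda t: t[1])
print(best_p, best_v)
```

Here the optimal commitment sits at the boundary where the follower is just kept indifferent (p near 2/3), which is exactly the kind of knife-edge solution that a boundedly rational human follower may fail to respect, motivating the robust formulations proposed in the paper.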

Subjects: 7.1 Multi-Agent Systems; 1. Applications

Submitted: May 5, 2008

This page is copyrighted by AAAI. All rights reserved.