On Measuring the Usefulness of Modeling in a Competitive and Cooperative Environment

Leonardo Garrido, Ramón Brena, and Katia Sycara

This paper presents recent results of our experimental work on quantifying exactly how useful it is to build models of other agents using nothing more than observations of their behavior. The testbed used in our experiments is an abstraction of the meeting scheduling problem, called the Meeting Scheduling Game, which has both competitive and cooperative features. The agents are selfish and use a rational, decision-theoretic approach based on the probabilistic models that each agent learns. We view agent modeling as an iterative and gradual process in which every new piece of information about a particular agent is analyzed so that the model of that agent is further refined. We present our Bayesian-modeler agent, which updates its models of the other agents using a Bayesian updating mechanism. We propose a framework for measuring the performance of different modeling strategies and establish quantified lower and upper bounds on the performance of any modeling strategy. Finally, we contrast the performance of a modeler from an individual and from a collective point of view, comparing the benefits for the modeler itself as well as for the group as a whole.
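As a concrete illustration, and not taken from the paper itself, the sketch below shows the general form of Bayesian model updating described above: a modeler maintains a probability distribution over candidate models of another agent and refines it after each observed action. All names (the candidate models, the likelihood function, and the example time slots) are illustrative assumptions rather than details of the Meeting Scheduling Game.

```python
def bayes_update(prior, likelihood, observation):
    """Return the posterior P(model | observation) for each candidate model."""
    unnormalized = {
        model: prior[model] * likelihood(model, observation)
        for model in prior
    }
    total = sum(unnormalized.values())
    # Guard against an observation that every candidate model deems impossible.
    if total == 0:
        return dict(prior)
    return {model: weight / total for model, weight in unnormalized.items()}


def likelihood(model, observed_slot):
    # Hypothetical probability that an agent of this type proposes the slot.
    preferences = {
        "prefers_morning": {"morning": 0.8, "afternoon": 0.2},
        "prefers_afternoon": {"morning": 0.3, "afternoon": 0.7},
    }
    return preferences[model].get(observed_slot, 0.0)


# Start with a uniform belief over two hypothetical models of the other agent
# and refine it as proposals are observed.
beliefs = {"prefers_morning": 0.5, "prefers_afternoon": 0.5}
for slot in ["morning", "morning", "afternoon"]:
    beliefs = bayes_update(beliefs, likelihood, slot)
print(beliefs)  # belief mass shifts toward "prefers_morning"
```

A decision-theoretic modeler of the kind the paper describes would then choose its own actions by computing expected utility under this belief distribution; the details of that step depend on the game and are not shown here.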
