Abstract:
One of the principal issues in multiple agent systems is how to treat the judgments of the agents in those systems: should they be combined or treated separately? Judgments that are "substantially different" likely signal that the agents employ different underlying models, so if the experts' judgments are disparate, it is unlikely that they should be combined. Nevertheless, developers of multiple agent systems have combined substantially different judgments by averaging, and such a combination is likely to produce a composite judgment that is inconsistent with each individual judgment. Analyzing the combination of such judgments is therefore an important aspect of the verification and validation of multiple agent systems, and a critical issue is determining whether the experts' judgments are similar or disparate. Accordingly, the purpose of this paper is to investigate the combination of probability judgments in multiple agent systems. Traditional statistics are used to investigate whether different judgments are substantially different, and a new approach is developed to determine whether agents' probability distributions are similar enough to combine or disparate enough to treat separately. A case study illustrates the problems of combining judgments in multiple agent systems and demonstrates the new approach.
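As a rough illustration of the issue raised above, the sketch below averages two disparate probability judgments and applies a conventional chi-square check of whether the judgments differ substantially. The agents, events, probabilities, assumed sample size, and the choice of chi-square test are all illustrative assumptions, not the paper's method or data.

```python
# A minimal sketch (not the paper's approach) showing why averaging two
# disparate probability judgments can yield a composite that matches neither,
# and one "traditional statistics" style check for substantial difference.
import numpy as np
from scipy.stats import chisquare

events = ["low", "medium", "high"]

# Hypothetical probability judgments from two agents over the same events.
agent_a = np.array([0.70, 0.20, 0.10])
agent_b = np.array([0.10, 0.20, 0.70])

# Simple averaging of the two judgments.
composite = (agent_a + agent_b) / 2.0
print("Composite judgment:", dict(zip(events, composite)))

# Treat agent B's judgment as observed counts over an assumed sample of n
# trials and agent A's as the expected counts, then test whether the two
# distributions differ significantly.
n = 100  # assumed sample size, for illustration only
stat, p_value = chisquare(f_obs=agent_b * n, f_exp=agent_a * n)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")

# A very small p-value suggests the judgments are substantially different,
# in which case the averaged composite represents neither agent well.
```

With the hypothetical numbers above, the composite assigns equal weight to the "low" and "high" events even though each agent strongly favors one of them, which is the kind of inconsistency the paper attributes to averaging disparate judgments.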