Abstract:
In traditional social theory, trust is a function of the cooperation promoted across a system of multiple human or artificial agents, which assures that conflict ends in a consensus on the facts drawn from reality, R. This view overlooks the downsides of cooperation (e.g., the invisibility of corruption and of terrorist sleeper cells) and the loss of computational power from the cost of communicating among an increasing number, N, of cooperating agents, which makes the traditional model impractical for a large system of computational agents solving difficult problems. In contrast to logical positivist models, quantizing the pro and con positions in decision-making may yield a robust model of argumentation whose computational power increases with N. Previously, we found that optimal solutions to ill-defined problems (idps) occurred when incommensurable beliefs, interacting before neutral decision makers, generated emotion sufficient to process information, I, but insufficient to impair the interaction, unexpectedly producing more trust than cooperation did. We extend this model to the first information density functional theory (IDFT) of groups.
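As a back-of-the-envelope illustration of the scaling claim above (our reading of the argument, not a construction from the paper): if consensus-seeking cooperation requires every pair of the N agents to exchange messages, communication overhead grows as N(N-1)/2, whereas a quantized pro-con debate in which each agent reports once to one of two opposing positions argued before neutral decision makers needs only on the order of N messages. A minimal Python sketch, with hypothetical function names:

```python
# Toy sketch (hypothetical, not from the paper): compare communication
# overhead for consensus-seeking cooperation, where every agent must
# exchange messages pairwise with every other agent (O(N^2)), against
# a quantized pro-con debate, where each agent reports once to one of
# two opposing positions presented to neutral decision makers (O(N)).

def consensus_messages(n: int) -> int:
    """Pairwise exchanges needed for all-to-all cooperation."""
    return n * (n - 1) // 2

def debate_messages(n: int) -> int:
    """One report per agent to its pro or con side, plus the two
    aggregate positions presented to the neutral decision makers."""
    return n + 2

for n in (10, 100, 1000):
    print(f"N={n:5d}  consensus={consensus_messages(n):8d}  "
          f"debate={debate_messages(n):5d}")
```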