Abstract:
This paper uses partially observable Markov decision processes (POMDPs) as a basic framework for multi-agent planning. We distinguish three perspectives: the first is that of an omniscient agent with access to the global state of the system; the second is that of an individual agent with access only to its local state; and the third is that of an agent that models the other agents' states of information. We detail how the first perspective differs from the other two due to partial observability. POMDPs allow us to formally define the notion of optimal actions in each perspective, and to quantify both the loss of performance due to partial observability and the possible gain in performance from intelligent information exchange between agents. As an example we consider the domain of agents in a distributed information network, where agents must decide how to route packets and how to share information with other agents. Though almost all routing protocols have been formulated based on detailed studies of the functional parameters of the system, there has been no clear formal representation of optimality. We argue that the various routing protocols should fall out as different approximations to the optimal policies in such a framework. Our approach also proves useful for computing error bounds for the approximations used in practical routing algorithms. Each routing protocol is a conditional plan involving physical actions, which change the physical state of the system, and actions that explicitly exchange information.
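To make the second perspective concrete, the following is a minimal sketch of the standard POMDP belief update (a Bayes filter over hidden states), the kind of "state of information" an individual agent maintains under partial observability. The two-state link model (`up`/`down` states, a `probe` action, `ack`/`timeout` observations) is a hypothetical illustration, not an example from the paper.

```python
# Standard POMDP belief update: b'(s') is proportional to
#     O(o | s', a) * sum_s T(s' | s, a) * b(s)
# Toy routing-flavored model invented for illustration only.

def belief_update(belief, action, observation, T, O):
    """Return the posterior belief over states after taking `action`
    and receiving `observation`, given transition model T and
    observation model O."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predicted probability of landing in s2 under the action.
        prior = sum(T[(s, action)][s2] * belief[s] for s in states)
        # Weight by the likelihood of the received observation.
        new_belief[s2] = O[(s2, action)][observation] * prior
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Hypothetical link model: probing a link that may be up or down.
T = {("up", "probe"):   {"up": 0.9, "down": 0.1},
     ("down", "probe"): {"up": 0.2, "down": 0.8}}
O = {("up", "probe"):   {"ack": 0.8, "timeout": 0.2},
     ("down", "probe"): {"ack": 0.1, "timeout": 0.9}}

b = {"up": 0.5, "down": 0.5}
b = belief_update(b, "probe", "timeout", T, O)
print(b)  # belief mass shifts toward "down" after a timeout
```

In this framing, an agent's routing decisions are a policy over such beliefs rather than over the (unobservable) global state, and explicit information-exchange actions are valuable exactly insofar as they sharpen these beliefs.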