Abstract:
Developing and testing intelligent agents is a complex, time-consuming, and costly task. As a result, problems in an agent's behavior may be discovered only after the agent has been deployed. In this paper we explore two implementations of a generic agent self-assessment framework applied to the Soar agent architecture. Our system extends previous work and can be used to achieve adjustable levels of agent autonomy or runtime verification with only minor modifications to existing Soar agents. We present results measuring the computational overhead of both approaches against a baseline agent that exhibits identical behavior without the self-assessment framework.