Abstract:
As automated systems become more complex, their propensity to fail in unexpected ways increases. As humans, we often notice their failures with the same ease that we recognize our own plans going awry. Yet the systems themselves are frequently unaware that the function they were designed to perform is no longer being performed. This is because humans have explicit expectations -- about both the system's behavior and our own behavior -- that allow us to notice an unexpected event. In this paper, we propose a way for AI systems to generate expectations about their own behavior, monitor those expectations, and attempt to diagnose the underlying failures that cause them to be violated. Once a cause has been hypothesized, attempts at recovery can be made. The process is naturally meta-cognitive in that the system must reason about its own cognitive processes to arrive at an accurate and useful response. We present an architecture called the Meta-Cognitive Loop (MCL), which attempts to tackle robustness in cognitive systems in a domain-general way, as a plug-in component that decreases the brittleness of AI systems.
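To make the monitor-diagnose-recover cycle concrete, the following is a minimal illustrative sketch in Python of the kind of loop the abstract describes: expectations are explicit predicates over observed state, violations are noted, a cause is hypothesized, and a recovery action is attempted. The class and method names, the mapping tables, and the toy driving scenario are assumptions for illustration only, not the paper's actual MCL implementation.

```python
# Illustrative sketch of an expectation-monitoring loop in the spirit of MCL.
# All names and the toy scenario below are assumptions, not the authors' code.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Expectation:
    """A named predicate over observed state that the host system expects to hold."""
    name: str
    holds: Callable[[Dict[str, float]], bool]


class MetaCognitiveLoop:
    def __init__(self, expectations: List[Expectation],
                 diagnoses: Dict[str, str],
                 recoveries: Dict[str, Callable[[], None]]):
        self.expectations = expectations  # what "normal" behavior looks like
        self.diagnoses = diagnoses        # expectation name -> hypothesized cause
        self.recoveries = recoveries      # hypothesized cause -> recovery action

    def note(self, state: Dict[str, float]) -> List[str]:
        """Detect which expectations are violated by the current state."""
        return [e.name for e in self.expectations if not e.holds(state)]

    def assess(self, violations: List[str]) -> Optional[str]:
        """Map a violated expectation to a hypothesized underlying failure."""
        for v in violations:
            if v in self.diagnoses:
                return self.diagnoses[v]
        return None

    def guide(self, cause: Optional[str]) -> None:
        """Attempt the recovery associated with the hypothesized cause, if any."""
        if cause is not None and cause in self.recoveries:
            self.recoveries[cause]()

    def step(self, state: Dict[str, float]) -> None:
        """One pass of the loop: monitor expectations, diagnose, recover."""
        self.guide(self.assess(self.note(state)))


if __name__ == "__main__":
    # Toy host system: a robot that expects forward motion after a "drive" command.
    mcl = MetaCognitiveLoop(
        expectations=[Expectation("moving_forward", lambda s: s["velocity"] > 0.1)],
        diagnoses={"moving_forward": "wheel_slip_or_obstacle"},
        recoveries={"wheel_slip_or_obstacle":
                    lambda: print("Recovery: back up and replan path.")},
    )
    mcl.step({"velocity": 0.0})  # expectation violated -> diagnose -> recover
```

Because the loop only consumes observations and exposes recovery hooks, a sketch like this stays agnostic to the host system's internals, which is the sense in which MCL is intended as a domain-general, plug-in component.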