Representing Mental States and Mechanisms
Abstract:
This paper focuses on the level of granularity at which representations of the mental world should be placed. That is, if one wishes to represent thinking about the self, about the states and processes of reasoning, at what level of detail should one attempt to declaratively capture the contents of thought? Some claim that a set of only two mental primitives is sufficient to represent human utterances involving verbs of thought, such as "I forgot her name." Alternatively, many in the artificial intelligence community have built systems that record elaborate traces of reasoning, keep track of knowledge dependencies or inferences, or encode extensive metaknowledge concerning the structure of internal rules and defaults. The position here is that the overhead involved in a complete trace of mental behavior and knowledge structures is intractable and does not reflect a capacity reasonably attributable to humans. Rather, a system should capture enough detail to represent a common set of reasoning failures. I represent a number of examples at such a level of granularity and describe what such representations offer an intelligent system. This capacity will enable a system to reason about itself so as to learn from its reasoning failures, changing its background knowledge to avoid repeating the failure. Two primitives are not sufficient for this task.
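As a purely illustrative sketch, and not the representation developed in the paper, the following hypothetical Python structures suggest the intermediate granularity argued for above: enough declarative detail to characterize a retrieval failure such as "I forgot her name" and to assign blame for it, without recording a complete trace of inference. All names and fields here are assumptions introduced for illustration.

```python
# Hypothetical illustration only: an intermediate-granularity record of a
# reasoning failure, between two coarse mental primitives and a full trace.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MentalEvent:
    """A declarative record of a single mental act (assumed schema)."""
    process: str                   # e.g., "memory-retrieval"
    goal: str                      # what the reasoner was trying to produce
    expected_outcome: str          # what should have resulted
    actual_outcome: Optional[str]  # what actually resulted (None = nothing)


@dataclass
class ReasoningFailure:
    """Pairs expectation with outcome so the failure type can be diagnosed."""
    event: MentalEvent
    failure_type: str              # e.g., "retrieval-failure" (forgetting)
    suspected_cause: str           # the piece of background knowledge to repair


# "I forgot her name": a retrieval failure represented with just enough detail
# to diagnose it and suggest a change to background knowledge.
forgot_name = ReasoningFailure(
    event=MentalEvent(
        process="memory-retrieval",
        goal="retrieve(name-of(person-21))",
        expected_outcome="a proper name",
        actual_outcome=None,
    ),
    failure_type="retrieval-failure",
    suspected_cause="weak-index(person-21, name)",
)

if __name__ == "__main__":
    print(forgot_name.failure_type, "->", forgot_name.suspected_cause)
```

The design point of the sketch is only that each failure record names the process, the expectation, the outcome, and a repairable cause, which is more than two primitives but far less than a full inference trace.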