Is Belief Revision Harder Than You Thought?

Marianne Winslett

Suppose one wishes to construct, use, and maintain a knowledge base (KB) of beliefs about the real world, even though the facts about that world are only partially known. In the AI domain, this problem arises when an agent has a base set of beliefs that reflect partial knowledge about the world, and then tries to incorporate new, possibly contradictory knowledge into this set of beliefs. We choose to represent such a KB as a logical theory, and view the models of the theory as representing possible states of the world that are consistent with the agent's beliefs. How can new information be incorporated into the KB? For example, given the new information that "b or c is true," how can one get rid of all outdated information about b and c, add the new information, and yet in the process not disturb any other information in the KB? The burden may be placed on the user or other omniscient authority to determine exactly what to add to and remove from the KB. But what's really needed is a way to specify the desired change intensionally, by stating some well-formed formula that the state of the world is now known to satisfy and letting the KB algorithms automatically accomplish that change. This paper explores a technique for updating KBs containing incomplete extensional information. Our approach embeds the incomplete KB and the incoming information in the language of mathematical logic. We present semantics and algorithms for our operators, and discuss the computational complexity of the algorithms. We show that the incorporation of new information is difficult even without the problems associated with justification of prior conclusions and inferences and identification of outdated inference rules and axioms.
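The update idea sketched in the abstract, applying new information to each model of the theory while changing as little as possible, can be illustrated with a small propositional example. The following is a minimal, illustrative sketch (not the paper's actual algorithm): models are Python dicts over three assumed atoms a, b, c, and each model is updated by keeping only those models of the new formula that differ from it on a set-inclusion-minimal set of atoms.

```python
from itertools import product

ATOMS = ["a", "b", "c"]  # illustrative propositional vocabulary

def all_models():
    """Enumerate every truth assignment over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def diff(m1, m2):
    """The set of atoms on which two models disagree."""
    return {x for x in ATOMS if m1[x] != m2[x]}

def update_model(m, formula):
    """Models of `formula` whose difference from m is minimal
    under set inclusion (no candidate differs on a strict subset)."""
    diffs = [(c, diff(m, c)) for c in all_models() if formula(c)]
    return [c for c, d in diffs
            if not any(d2 < d for _, d2 in diffs)]

def update_kb(models, formula):
    """Update a KB (a list of models) model by model."""
    result = []
    for m in models:
        for c in update_model(m, formula):
            if c not in result:
                result.append(c)
    return result

# New information: "b or c is true."
new_info = lambda m: m["b"] or m["c"]

# One possible state of the world: a true, b and c false.
kb = [{"a": True, "b": False, "c": False}]
updated = update_kb(kb, new_info)
# Every resulting model satisfies the new information,
# and the unrelated fact a remains true in all of them.
```

Here the update of {a, ¬b, ¬c} yields exactly the two models that flip b alone or c alone; flipping both, or disturbing a, is never a minimal change. This mirrors the abstract's requirement that outdated information about b and c be replaced without touching the rest of the KB.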
