Proceedings: Goal-Driven Learning
Issue: Papers from the 1994 AAAI Spring Symposium
Abstract:
Bayesian approaches to reasoning under uncertainty have gained substantially in popularity recently, due to the efforts of AI researchers and statisticians to find tractable means of probability propagation in restricted domains, namely those properly described by causal (Bayesian) networks (cf. Pearl 1988 and Neapolitan 1990). Most Bayesians are satisfied with such an approach to automating Bayesian inference: it is normatively correct, occasionally tractable, and supplies precisely what is needed for Bayesian decision theory, the theory that provides normatively correct means for decision making under uncertainty. Some, however, seek additional means for reasoning under uncertainty, particularly on the grounds that causal networks are often too complex for a given pragmatic problem-solving context. The question is whether there might be some qualitative, computationally simpler reasoning strategy that can be justified as a cooperative supplement to Bayesian inference.
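As a minimal illustration of the kind of inference a causal (Bayesian) network supports, the Python sketch below computes a posterior in a toy three-node network by enumeration and then applies Bayesian decision theory by choosing the action with maximal expected utility. The network structure, probabilities, utilities, and all names in the code are hypothetical and are not drawn from the paper; this is an illustrative sketch, not the authors' method.

```python
# Toy causal network: Rain -> Sprinkler -> WetGrass (Rain also directly
# influences WetGrass).  All numbers are invented for illustration.

# P(Rain)
P_rain = {True: 0.2, False: 0.8}
# P(Sprinkler | Rain), keyed by rain, then sprinkler
P_sprinkler = {True: {True: 0.01, False: 0.99},
               False: {True: 0.4, False: 0.6}}
# P(WetGrass = true | Sprinkler, Rain), keyed by (sprinkler, rain)
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, using the causal factorization."""
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet_true = P_wet[(sprinkler, rain)]
    return p * (p_wet_true if wet else 1.0 - p_wet_true)

def posterior_rain(wet_observed=True):
    """P(Rain | WetGrass = wet_observed), by summing out Sprinkler and normalizing."""
    scores = {rain: sum(joint(rain, s, wet_observed) for s in (True, False))
              for rain in (True, False)}
    z = sum(scores.values())
    return {rain: p / z for rain, p in scores.items()}

if __name__ == "__main__":
    post = posterior_rain(wet_observed=True)
    print("P(Rain | WetGrass=true) =", round(post[True], 3))

    # Bayesian decision theory: pick the action with maximal expected utility.
    # Hypothetical utilities: an umbrella costs a little, getting soaked costs a lot.
    utility = {("umbrella", True): -10, ("umbrella", False): -10,
               ("no_umbrella", True): -100, ("no_umbrella", False): 0}
    best = max(("umbrella", "no_umbrella"),
               key=lambda a: sum(post[r] * utility[(a, r)] for r in (True, False)))
    print("Expected-utility-maximizing action:", best)
```

Even on this tiny example, enumeration touches every joint assignment, which hints at why exact propagation is tractable only in suitably restricted networks and why a computationally simpler qualitative supplement might be attractive.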