Model-based vision relies on the interpretation of observations of component features in order to identify an object. Computer vision systems observe features, which provide evidence for the existence of the whole object. However, events external to the vision sensor can produce misleading evidence, which in turn can lead to false recognition. For example, in military target acquisition, the combination of weather and terrain frequently produces a reflection of the sun off a lake, resulting in an observation with properties similar to those of a real target and leading to a false recognition. These "background effects" can be eliminated by adding contextual knowledge that enables the interpretation process to identify and discard false observations. Unfortunately, recognition systems that exploit contextual knowledge typically have two limitations. First, the manpower and expertise involved in anticipating and encoding the possible types of confounding effects is a major obstacle to building the requisite knowledge base. Second, the knowledge is usually embedded in the object model, resulting in an object- and environment-specific system that cannot be readily extended or transferred to new domains.
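To make the idea concrete, the sun-glint example above can be sketched as a contextual filter applied to candidate observations. This is a minimal illustration, not the paper's method: the `Observation` class, the axis-aligned water regions, and the `glare_threshold` parameter are all hypothetical assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A hypothetical candidate detection from the vision sensor."""
    x: float
    y: float
    intensity: float  # normalized brightness in [0, 1]

def over_water(obs, water_regions):
    """Contextual test: does the observation fall inside a known water region?

    Each region is an assumed axis-aligned bounding box (x0, y0, x1, y1).
    """
    return any(x0 <= obs.x <= x1 and y0 <= obs.y <= y1
               for (x0, y0, x1, y1) in water_regions)

def filter_observations(observations, water_regions, glare_threshold=0.9):
    """Discard very bright observations over water as likely sun-glint artifacts,
    keeping everything else for the interpretation process."""
    return [o for o in observations
            if not (over_water(o, water_regions) and o.intensity > glare_threshold)]

# One lake modeled as a bounding box (hypothetical map knowledge).
water = [(0.0, 0.0, 10.0, 5.0)]
obs = [Observation(2.0, 3.0, 0.95),   # bright glint on the lake -> rejected
       Observation(12.0, 8.0, 0.95),  # bright but on land -> kept
       Observation(4.0, 1.0, 0.40)]   # dim return on the lake -> kept
kept = filter_observations(obs, water)
```

Note that the contextual rule here is hand-coded for one effect in one environment, which is exactly the limitation the paragraph identifies: each new confounding effect would require another such rule embedded in the system.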