Sensor interpretation (SI) involves determining abstract explanations for sensor data. SI differs in several significant ways from the kind of "diagnosis problems" that have been heavily studied within the belief network community. These differences make approximate, satisficing problem-solving techniques necessary for most real-world SI problems. Currently, there are no AI techniques with well-understood properties that can apply a wide range of approximate SI strategies. In this paper, we examine the differences between SI and diagnosis that lead to the need for approximation, and discuss several approximation techniques. We then consider the two main AI approaches to SI, blackboard systems and dynamic belief networks, and explore their deficiencies for SI. As a point of comparison, we also consider techniques used by the target tracking community.