Many researchers hold that an autonomous system, if it is to behave appropriately in an uncertain environment, must maintain an internal representation (world model) of the entities, events and situations it perceives in the world. However, research into active vision, inattentional amnesia and change blindness has implications for our views on the content of represented knowledge, and it raises issues concerning how knowledge held over the longer term is coupled with dynamically perceived sense data. This in turn has implications both for the formalisms we employ and for ontology. Importantly, in the latter case, evidence on the micro-structure of natural vision suggests that ontological description should perhaps be (task-related) feature-oriented rather than object-oriented. These issues are discussed in the context of existing work on developing autonomous agents for a simulated driving world. The view presented is that the reliability of represented knowledge guides information seeking, and perhaps explains why some things are ignored.