We have been exploring an approach to robot learning based on a hierarchy of types of knowledge of the robot's senses, actions, and spatial environment. This approach grew out of a computational model of the human cognitive map that exploited the distinction between procedural, topological, and metrical knowledge of large-scale space [Kuipers, 1978, 1979, 1983]. More recently, Kuipers and Byun [1988, 1991] extended this semantic hierarchy approach to continuous sensorimotor interaction with a continuous environment, demonstrating the fundamental role of the identification of "distinctive places" in robot spatial learning.

Our current research extends the semantic hierarchy framework in three directions. (1) We are testing the hypothesis that the semantic hierarchy approach will scale up naturally from simulated to physical robots. In fact, we expect that it will significantly simplify the robot's sensorimotor interaction with the world. (2) We are demonstrating how the semantic hierarchy, and the learned topological and metrical cognitive map, support learning of motion control laws, leading incrementally from low-speed, friction-dominated motion to high-speed, momentum-dominated motion. (3) We are developing methods whereby a "tabula rasa" robot can explore and learn the properties of an initially uninterpreted sensorimotor system, to the point where it can define and execute control laws, identify distinctive places and paths, and hence reach the first level of the spatial semantic hierarchy.

We describe progress toward these goals in the sections below. If these goals can be achieved, we will have formulated a comprehensive computational model of the representation, learning, and use of a substantial body of knowledge about space and action. In addition to the intrinsic value of this knowledge, the semantic hierarchy approach should be useful in modeling other domains.
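To make the notion of a "distinctive place" concrete, the following is a minimal sketch in the spirit of the Kuipers and Byun work, where a distinctive place is a local maximum of a distinctiveness measure reached by hill-climbing. The particular measure, the discrete neighbor function, and all names here are illustrative assumptions, not taken from the systems described in this paper.

```python
# Hypothetical sketch: identifying a distinctive place by hill-climbing
# on a distinctiveness measure. The measure and the robot's local motion
# model (the `neighbors` function) are assumed for illustration only.

def hill_climb(position, distinctiveness, neighbors, max_steps=100):
    """Move to the neighboring position with the highest distinctiveness
    until no neighbor improves on the current position; that local
    maximum is a candidate distinctive place."""
    for _ in range(max_steps):
        best = max(neighbors(position), key=distinctiveness)
        if distinctiveness(best) <= distinctiveness(position):
            return position  # local maximum reached
        position = best
    return position

# Toy example: a 1-D corridor whose distinctiveness peaks at x = 5
# (e.g. a point equidistant from two walls).
measure = lambda x: -(x - 5) ** 2
steps = lambda x: [x - 1, x + 1]
place = hill_climb(0, measure, steps)
# place == 5
```

Because the robot always terminates its motion at such a local maximum, small errors in execution are corrected each time the place is revisited, which is what makes distinctive places reliable nodes in a topological map.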