Reproductive perception is a novel approach to perception based on the assumption that perception is predominantly a generative process: models representing hypotheses about the current state of the environment generate so-called pseudo-sensor data, which are matched against the actual sensor data to evaluate and refine these hypotheses. This contrasts with the view that perception is mainly a reductive process in which large amounts of sensor data are processed until a compact representation is achieved. Several successful applications of this approach to spatial world modeling are presented here. The first uses a robot arm and a camera to learn eye-hand coordination. The second describes the successful detection and 3D localization of humans in single 2D thermal images. The third presents work in progress on the generation and use of compact 3D models of unstructured environments on a mobile robot. In all three examples, the reproductive perception approach influences not only how spatial knowledge is generated and represented, but also how it is used, especially with respect to the classification and recognition of objects.
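The hypothesize-generate-match loop described above can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the environment state is assumed to be a single scalar (an object's position), the sensor a noiseless 1D range finder at the origin, and hypotheses are scored by how well their generated pseudo-sensor data match the actual reading. All function names and the scenario are assumptions made for illustration.

```python
def generate_pseudo_sensor(hypothesis_position):
    """Generate pseudo-sensor data (a range reading) from a state hypothesis.

    Assumes a range sensor fixed at the origin of a 1D world.
    """
    sensor_position = 0.0
    return abs(hypothesis_position - sensor_position)

def mismatch(hypothesis_position, actual_reading):
    """Mismatch between the generated pseudo-sensor data and the actual datum."""
    return abs(generate_pseudo_sensor(hypothesis_position) - actual_reading)

def perceive(actual_reading, hypotheses):
    """Select the hypothesis whose pseudo-sensor data best matches reality."""
    return min(hypotheses, key=lambda h: mismatch(h, actual_reading))

if __name__ == "__main__":
    reading = 2.0                      # actual sensor datum
    candidates = [0.5, 1.0, 2.0, 3.5]  # hypotheses about the object position
    best = perceive(reading, candidates)
    print(best)  # -> 2.0 (its pseudo-reading matches the actual reading exactly)
```

In a full system, the candidate set would be refined iteratively (e.g., by resampling hypotheses near the best match) rather than chosen from a fixed list, but the generative direction of the data flow is the same: model state in, predicted sensor data out, compared against what the sensor actually delivers.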