The ability to use a mirror as an instrument for spatial reasoning enables an agent to make meaningful inferences about the positions of objects in space based on the appearance of their reflections in mirrors. The model presented in this paper enables a robot to infer the perspective from which objects reflected in a mirror appear to be observed, allowing the robot to treat this perspective as a virtual camera. Prior work by our group presented an architecture through which a robot learns the spatial relationship between its body and its visual sense, mimicking an early form of self-knowledge in which infants learn about their bodies and senses through the interactions between them. In the present work, this self-knowledge is used to determine the mirror's perspective: by observing the position of its end-effector in a mirror across several distinct poses, the robot determines a perspective that is consistent with these observations. The system is evaluated by measuring how well the robot's predictions of its end-effector's position, in 3D relative to the robot's egocentric coordinate system and in 2D as projected onto its cameras, match measurements of a marker tracked by its stereo vision system. Reconstructions of the end-effector's 3D position, as computed from the perspective of the mirror, agree with the forward kinematic model to within a mean of 31.55 mm; when the end-effector is observed directly by the robot's cameras, reconstructions agree to within 5.12 mm. Predictions of the end-effector's 2D position in the visual field agree with visual measurements to within a mean of 18.47 pixels when observed in the mirror, or 5.66 pixels when observed directly by the robot's cameras.
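The virtual-camera idea rests on a standard geometric fact: reflecting a camera's pose across the mirror plane yields a virtual camera that observes the reflected scene. The sketch below illustrates this construction only; it is not the estimation procedure described in the paper (which recovers the mirror perspective from end-effector observations), and the function names, the plane parameterization (unit normal `n`, offset `d`), and the world-to-camera pose convention `(R, t)` are all assumptions made for illustration.

```python
import numpy as np

def reflect_point(x, n, d):
    """Reflect a 3D point x across the plane {p : n.p = d}, with n a unit normal."""
    return x - 2.0 * (np.dot(n, x) - d) * n

def virtual_camera(R, t, n, d):
    """Given a real camera with world-to-camera pose (R, t), return the pose of
    the virtual camera obtained by reflecting it across the mirror plane (n, d).

    The camera center in world coordinates is c = -R^T t; its reflection gives
    the virtual center. Directions are reflected by the Householder matrix
    M = I - 2 n n^T. Note the reflection flips handedness, which downstream
    projection code must account for (e.g. a mirrored image x-axis)."""
    c = -R.T @ t                             # real camera center in the world
    c_virt = reflect_point(c, n, d)          # reflected (virtual) center
    M = np.eye(3) - 2.0 * np.outer(n, n)     # reflection of direction vectors
    R_virt = R @ M                           # reflected orientation
    t_virt = -R_virt @ c_virt                # recover translation from center
    return R_virt, t_virt
```

For example, a camera at the origin facing a mirror in the plane z = 1 yields a virtual camera centered at (0, 0, 2), i.e. as far behind the mirror as the real camera is in front of it.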