Vision and space are prominent modalities in our experiences as humans. We live in a richly visual world and are constantly, acutely aware of our position in space and our surroundings. In contrast to this seemingly precise awareness, we are also able to reason abstractly, use language, and construct arbitrary hypothetical scenarios. In this position paper, we present an AI system we are building to work toward human-level capability in visuospatial processing. We take mental imagery processing as our psychological basis and integrate it with symbolic processing. In designing this system, we consider constraints both from the natural world (as described by psychology and neuroscience) and from those uncovered by AI research. In doing so, we hope to bridge the gap between abstract reasoning and detailed perception.