From our perspective, modeling lexical development is an ideal task for studying embodied cognition: we can isolate linguistically and conceptually simple situations and construct and test very detailed models. Our first serious effort was Regier’s thesis. Regier used a simplified but realistic connectionist model of the visual system, together with a conventional back-propagation learning scheme, to demonstrate how some lexical items describing spatial relations might develop in different languages. The system learned the appropriate terms for simple spatial-relation scenarios from a set of labeled example movies. Since languages differ radically in how spatial relations are conceptualized, there was no obvious set of primitive features that could be built into the program. The key to Regier’s success came directly from embodiment: all people share the same visual system, and visual concepts must arise from the capabilities of that system. By building in a simple but realistic model of the visual system, Regier enabled his program to learn spatial terms from a wide range of languages using standard back-propagation techniques. We know of no other way this kind of result could be achieved. The point of our current work is to show that this paradigm can be extended to a wide range of central questions in cognitive science.
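To make the learning mechanism concrete, the sketch below shows back-propagation training a tiny network to label one spatial relation from labeled examples. This is only an illustrative toy, not Regier's model: his system included a structured visual-system front end, whereas here the input features (the trajector's displacement from a landmark), the network size, and all parameters are assumptions made for the example.

```python
# Minimal back-propagation sketch (illustrative, NOT Regier's model):
# learn the spatial term "above" vs. "below" from labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled data: each example is (dx, dy), the trajector's
# position relative to a landmark; label is 1 if the trajector is above.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid units (assumed architecture).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: cross-entropy gradient at the output is (p - y).
    grad_z2 = (p - y) / len(X)
    grad_W2 = h.T @ grad_z2
    grad_b2 = grad_z2.sum(axis=0)
    grad_h = grad_z2 @ W2.T
    grad_z1 = grad_h * h * (1 - h)      # sigmoid derivative
    grad_W1 = X.T @ grad_z1
    grad_b1 = grad_z1.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Fraction of training examples labeled correctly after training.
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```

The point of the toy is only that no primitive feature for "above" is built in; the network converges on the distinction because the input encoding (relative position) reflects what a perceptual system makes available, which is the role Regier's visual-system model played at much greater scale.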