A recent trend in the cognitive sciences is the development of models of language acquisition in which word meaning is grounded in the learner's perceptions and actions. Such physical descriptions of meaning are inadequate for many verbs, however, because of the ambiguous nature of intentional action. We describe a model that addresses these ambiguities by explicitly representing the role of intention recognition in word learning. Augmenting this model with phrase-boundary information improves learning relative to the original syntax-free model, with greater relative improvement for verbs than for nouns. Evaluations are performed using data collected in a virtual environment. The results highlight the importance of representing intentions in cognitive models and suggest a greater role for the representation of intentions in applied areas of Artificial Intelligence.