An Architecture for the Learning of Perceptually Grounded Word Meanings

Michael Gasser and Eliana Colunga

In this statement, we discuss two kinds of properties that a grounded model of the learning of word meaning should have: those related to the interaction of linguistic and non-linguistic categories and processes, and those related to the representational demands placed on such a model. We also introduce Playpen, a neural network architecture that has these properties. A grounded model of the learning of word meaning should allow for a variety of possible interactions between linguistic and non-linguistic categories and processes. Because Playpen imposes no direction on processing and learning, it accommodates all such interactions. A grounded model should also build on an account of what form perceptual categories take, how they take on that form, and how they are associated with words. Playpen addresses these problems through the use of two kinds of primitive units (those representing primitive object features and those representing primitive relation features) and a simple correlational learning algorithm.
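
To make the idea of correlational learning over the two kinds of primitive units concrete, the following is a minimal sketch of a Hebbian-style update that strengthens connections between co-active word units and perceptual feature units. This is not the Playpen implementation itself; the unit counts, names, and learning rate are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch only: associates word units with two banks of
# perceptual feature units via a simple correlational (Hebbian) update.
# All sizes and names below are assumptions, not Playpen's actual design.

n_object_features = 8    # primitive object-feature units (assumed)
n_relation_features = 4  # primitive relation-feature units (assumed)
n_words = 5              # word-form units (assumed)

rng = np.random.default_rng(0)
weights = np.zeros((n_words, n_object_features + n_relation_features))

def correlational_update(word_activity, feature_activity, lr=0.1):
    """Weight change proportional to the product of co-active units."""
    return lr * np.outer(word_activity, feature_activity)

# One co-occurrence of a word with a perceived scene:
word_activity = np.zeros(n_words)
word_activity[2] = 1.0                       # hypothetical word unit
feature_activity = rng.random(n_object_features + n_relation_features)

weights += correlational_update(word_activity, feature_activity)
```

Under this kind of rule, repeated co-occurrence of a word with particular object-feature and relation-feature activations gradually strengthens the corresponding associations, with no built-in direction from language to perception or vice versa.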
