The meaning of words in everyday language depends on two very different kinds of relations. On one hand, words refer to (are about) the world; this relation rests on causal interactions between information and the physical world. On the other hand, agents use words to pursue goals by producing speech acts. A complete model of language must bridge these two kinds of meaning. These observations have motivated the implementation of a series of situated language processing systems in my lab. I will report on my ongoing attempt to develop a computational framework for language grounding that distills lessons learned from these implementations. Drawing on ideas from semiotics and constructivism, knowledge is represented in terms of signs that are causally connected to their referents, and actions that the agent can perform to verify, acquire, and use knowledge.
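The representational idea in the closing sentence can be sketched in code. This is a minimal toy illustration, not the framework itself: the class name, fields, and the set-based "sensor" are all hypothetical assumptions introduced here to show how a sign can pair a symbol with a causally grounded verification procedure and an acquisition action.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sign:
    """Hypothetical sketch: a sign links a symbol to its referent via
    the perceptual procedure that causally grounds it, plus an action
    the agent can perform to gather further evidence."""
    symbol: str                                  # the word, e.g. "cup"
    verify: Callable[[], bool]                   # perceptual check grounding the sign
    acquire: Callable[[], None] = lambda: None   # action to seek evidence (e.g. look around)

# Toy world: the agent's "sensor" is just a set of currently visible labels.
visible = {"cup", "table"}

cup = Sign(symbol="cup", verify=lambda: "cup" in visible)
ball = Sign(symbol="ball", verify=lambda: "ball" in visible)

print(cup.verify())   # True: the sign's referent has causal support in the scene
print(ball.verify())  # False: no perceptual evidence for the referent
```

The point of the sketch is only the shape of the representation: meaning lives in the coupling between a symbol and the procedures that connect it to the world, not in the symbol alone.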