Although the field of natural language processing has made considerable strides in the automated processing of standard language, figurative (i.e., non-literal) language still poses great difficulty. Normally, when we understand human language, we combine the meanings of individual words into larger units in a compositional manner. However, understanding figurative language often requires an interpretive adjustment to individual words. A complete model of language processing must therefore account for the way normal word meanings can be profoundly altered by their combination. Although figurative language is common in naturally occurring language, we know of no previous quantitative analyses of this phenomenon. Furthermore, while certain figurative types and tokens are used more frequently than others, it is unknown whether frequency of use interacts with processing load. This paper outlines our current research program exploring the functional and neural bases of figurative language through a combination of theoretical work, corpus analysis, and experimental techniques. Previous research suggests that the cerebral hemispheres may process language in parallel, each with somewhat different priorities, ultimately competing to reach an appropriate interpretation. If this is indeed the case, an optimal architecture for automated language processing may need to include similar parallel-processing circuits.