A Flexible, Parallel Generator of Natural Language
My Ph.D. thesis (Ward 1992, 1991) addressed the task of generating natural language utterances. It was motivated by two difficulties in scaling up existing generators. First, current generators accept only input that is relatively poor in information, such as feature structures or lists of propositions; they are unable to handle input rich in information, as one might expect from, for example, an expert system with a complete model of its domain or a natural language understander with good inference ability. Second, current generators have a very restricted knowledge of language -- indeed, they succeed largely because they have few syntactic or lexical options available (McDonald 1987) -- and they cannot cope with more knowledge because they handle interactions among the various possible choices only as special cases. To address these and other issues, I built a system called FIG (flexible incremental generator). FIG is based on a single associative network that encodes lexical knowledge, syntactic knowledge, and world knowledge. Computation is done by spreading activation across the network, supplemented with a small amount of symbolic processing. Thus, FIG is a spreading activation or structured connectionist system (Feldman et al. 1988).
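To make the computational model concrete, the following is a minimal sketch of spreading activation over an associative network in the spirit just described. The node names, weights, decay factor, and number of rounds are all illustrative assumptions, not details of FIG itself:

```python
# A minimal sketch of spreading activation over an associative network.
# Node labels, edge weights, and parameters are hypothetical; FIG's actual
# network and update rule are more elaborate.

from collections import defaultdict

class Network:
    def __init__(self):
        # node -> list of (neighbor, weight) links
        self.edges = defaultdict(list)

    def link(self, a, b, w):
        self.edges[a].append((b, w))

    def spread(self, activation, decay=0.5, rounds=3):
        """Each round, every active node passes a decayed, weighted share
        of its activation level to its neighbors."""
        for _ in range(rounds):
            new = defaultdict(float, activation)
            for node, level in activation.items():
                for nbr, w in self.edges[node]:
                    new[nbr] += level * w * decay
            activation = dict(new)
        return activation

# World knowledge, lexical knowledge, and syntactic knowledge all live
# in the same network, so activation from any source can favor a word.
net = Network()
net.link("concept:dog", "word:dog", 1.0)
net.link("concept:dog", "word:canine", 0.4)
net.link("syntax:noun", "word:dog", 0.6)

# Activate a concept and a syntactic expectation, then pick the word
# that accumulates the most activation.
act = net.spread({"concept:dog": 1.0, "syntax:noun": 0.5})
best = max(["word:dog", "word:canine"], key=lambda w: act.get(w, 0.0))
```

Here `best` comes out as `"word:dog"`, since that word receives converging activation from both the conceptual and the syntactic source; this convergence of evidence from multiple knowledge types is the intuition behind using one network rather than separate modules.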
Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.