Proceedings:
Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity
Abstract:
Statistical techniques for NLP typically do not take advantage of existing domain knowledge and require large amounts of tagged training data. This paper presents a partial remedy to these shortcomings by introducing a richer class of statistical models, graphical models, along with techniques for: (1) establishing the form of the model in this class that best describes a given set of training data, (2) estimating the parameters of graphical models from untagged data, (3) combining constraints formulated in propositional logic with those derived from training data to produce a graphical model, and (4) simultaneously resolving interdependent ambiguities. The paper also describes how these tools can be used to produce a broad-coverage lexicon represented as a probabilistic model, and presents a method for using such a lexicon to simultaneously disambiguate all words in a sentence.
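The following is a minimal, self-contained sketch (not the paper's actual system) of two of the ideas the abstract names: estimating the parameters of a small graphical model from untagged data with EM, and simultaneously resolving two interdependent ambiguities by scoring sense *pairs* under a joint distribution. The sense inventory, context features, and toy corpus are invented purely for illustration.

```python
# Sketch only: a two-node graphical model
#   P(s1, s2, c1, c2) = P(s1, s2) * P(c1 | s1) * P(c2 | s2)
# where s1, s2 are hidden word senses and c1, c2 are observed context words.
# All names below (senses, contexts, corpus) are hypothetical.
import itertools
import random
from collections import defaultdict

random.seed(0)

SENSES_1 = ["bank/finance", "bank/river"]      # candidate senses, word 1
SENSES_2 = ["plant/factory", "plant/flora"]    # candidate senses, word 2
CONTEXTS = ["money", "water", "machinery", "leaves"]

def sample_corpus(n=500):
    """Untagged training data: context-word pairs only, no sense labels."""
    corpus = []
    for _ in range(n):
        if random.random() < 0.5:   # "financial/industrial" situations
            c1 = random.choice(["money", "money", "machinery"])
            c2 = random.choice(["machinery", "machinery", "money"])
        else:                        # "riverside/botanical" situations
            c1 = random.choice(["water", "water", "leaves"])
            c2 = random.choice(["leaves", "leaves", "water"])
        corpus.append((c1, c2))      # the generating senses stay hidden
    return corpus

def random_dist(keys):
    w = {k: random.random() + 0.1 for k in keys}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def em(corpus, iterations=30):
    """Estimate P(s1, s2), P(c1 | s1), P(c2 | s2) from untagged data."""
    joint = random_dist(list(itertools.product(SENSES_1, SENSES_2)))
    emit1 = {s: random_dist(CONTEXTS) for s in SENSES_1}
    emit2 = {s: random_dist(CONTEXTS) for s in SENSES_2}
    for _ in range(iterations):
        cj = defaultdict(float)                       # expected joint counts
        ce1 = defaultdict(lambda: defaultdict(float)) # expected emission counts
        ce2 = defaultdict(lambda: defaultdict(float))
        for c1, c2 in corpus:
            # E-step: posterior over hidden sense pairs for this instance.
            post = {(s1, s2): joint[(s1, s2)] * emit1[s1][c1] * emit2[s2][c2]
                    for s1, s2 in joint}
            z = sum(post.values())
            for (s1, s2), p in post.items():
                p /= z
                cj[(s1, s2)] += p
                ce1[s1][c1] += p
                ce2[s2][c2] += p
        # M-step: re-estimate parameters from the expected counts.
        n = len(corpus)
        joint = {k: v / n for k, v in cj.items()}
        emit1 = {s: {c: ce1[s][c] / sum(ce1[s].values()) for c in CONTEXTS}
                 for s in SENSES_1}
        emit2 = {s: {c: ce2[s][c] / sum(ce2[s].values()) for c in CONTEXTS}
                 for s in SENSES_2}
    return joint, emit1, emit2

def disambiguate(joint, emit1, emit2, c1, c2):
    """Pick the most probable sense PAIR: the joint table couples the two
    decisions, so the ambiguities are resolved simultaneously."""
    return max(joint, key=lambda p: joint[p] * emit1[p[0]][c1] * emit2[p[1]][c2])

joint, emit1, emit2 = em(sample_corpus())
# Note: with untagged data the clusters are identifiable only up to a
# permutation of the sense labels, so the names printed may be swapped.
print(disambiguate(joint, emit1, emit2, "money", "machinery"))
print(disambiguate(joint, emit1, emit2, "water", "leaves"))
```

The joint sense table is what makes the resolution "simultaneous": each word's best sense depends on the sense chosen for the other, rather than being picked independently.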