Abstract:
This paper addresses the question “Is cognitive science relevant to AI problems?” by considering whether the internals of NLP systems can remain a black box with respect to how humans process language. Is it sufficient to model input/output behavior using computational techniques that bear little resemblance to human language processing, or is it necessary for NLP systems to model the internals of that processing? The basic conclusion is that it is important to look inside the black box of the human language processor and to model its behavior at a lower level of abstraction than input/output behavior. The development of functional NLP systems may actually be facilitated, not hindered, by adopting cognitive constraints on how humans process language. The relevance of this position for the symposium is considered, and some suggestions for moving forward are presented.