Proceedings: Natural Language Generation in Spoken and Written Dialogue
Issue: Papers from the 2003 AAAI Spring Symposium
Track: Contents
Abstract:
Respondents to computer-administered surveys don’t always interpret ordinary words and phrases uniformly; clarification dialog can help improve uniformity of interpretation, and thus survey data quality. Here we explore two approaches to interfaces for web-based questionnaires that could increase the number of occasions on which helpful clarification is provided. One approach is to increase respondents’ sensitivity to the possibility of conceptual misalignment by rewording questions so that they include part of the definition of the key concept(s). We found that under some circumstances this led to increased requests for the full definition, suggesting that respondents recognize the potential for conceptual misalignment. Another approach is to build respondent models so that a mixed-initiative survey system can determine when respondents are confused and volunteer clarification. Such models can vary in their specificity, from generic respondents to groups of respondents (stereotypic models) to models of individual respondents. Our data show both potential benefits and pitfalls of this approach, and they begin to map out the territory for dialog systems that collect data from users, as opposed to systems that provide data to users. Our data also begin to help determine the circumstances under which more sophisticated survey dialog systems with natural language generation and comprehension abilities are needed.
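
The abstract's second approach, respondent models that let a mixed-initiative system decide when to volunteer clarification, lends itself to a minimal sketch. The sketch below is a hypothetical illustration and not the paper's system: the class and function names, the back-off from individual to stereotypic (group) to generic evidence, and the 0.5 threshold are all assumptions introduced here.

from dataclasses import dataclass, field
from typing import Dict, Optional

# Assumption: a generic misalignment rate pooled over all respondents.
GENERIC_CONFUSION_RATE = 0.30

@dataclass
class RespondentModel:
    # Stereotypic level: estimated misalignment rate per respondent group.
    group_rates: Dict[str, float] = field(default_factory=dict)
    # Individual level: observed evidence for this particular respondent.
    personal_misses: int = 0
    personal_answers: int = 0

    def confusion_estimate(self, group: Optional[str] = None) -> float:
        """Back off from individual evidence to a group rate to a generic rate."""
        if self.personal_answers >= 3:  # enough individual evidence to trust
            return self.personal_misses / self.personal_answers
        if group is not None and group in self.group_rates:
            return self.group_rates[group]
        return GENERIC_CONFUSION_RATE

def should_volunteer_definition(model: RespondentModel,
                                group: Optional[str] = None,
                                threshold: float = 0.5) -> bool:
    """Volunteer the full definition when estimated misalignment risk is high."""
    return model.confusion_estimate(group) >= threshold

# Example: a respondent from a group that often misreads a key survey concept.
model = RespondentModel(group_rates={"first-time-respondent": 0.6})
print(should_volunteer_definition(model, group="first-time-respondent"))  # True

The back-off order mirrors the specificity spectrum the abstract describes: individual data when available, a stereotypic group rate otherwise, and a generic estimate as the fallback.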