Enhancing Collaboration in Computer-Administered Survey Interviews

Michael F. Schober, Frederick G. Conrad and Jonathan E. Bloom

We investigated the extent to which a collaborative view of human conversation transfers directly to interaction with non-human agents, and we examined how a collaborative view can improve user interface design. In two experiments we contrasted user-initiated and system-initiated clarification in computer-administered surveys. In the first (text-based) study, users who could clarify the interpretations of questions by clicking on highlighted definitions comprehended questions more accurately (in ways that more closely fit the survey designers’ intentions) than users who couldn’t, and thus they provided more accurate responses. They were far more likely to ask for help when they had been instructed that clarification would be essential than when they were merely told that help was available. In the second (speech-based) Wizard-of-Oz study, users responded more accurately and asked more questions when they received unsolicited clarification about question meaning from the system in response to their linguistic cues of uncertainty (ums and uhs, restarts, talk other than an answer, etc.) than when they did not. The results suggest that clarification in collaborative systems will be successful only if users recognize that their own conceptions may differ from the system’s, and if they are willing to take extra turns to improve their understanding.


Copyright © AAAI. All rights reserved.