Task-Oriented Dialogs with Animated Agents in Virtual Reality

Jeff Rickel and W. Lewis Johnson

We are working towards animated agents that can carry on tutorial, task-oriented dialogs with human students. The agent’s objective is to help students learn to perform physical, procedural tasks, such as operating and maintaining equipment. Although most research on such dialogs has focused on verbal communication, nonverbal communication can play many important roles as well. To allow a wide variety of interactions, the student and our agent cohabit a three-dimensional, interactive, simulated mock-up of the student’s work environment. The agent, Steve, can generate and recognize speech, demonstrate actions, use gaze and gestures, answer questions, adapt domain procedures to unexpected events, and remember past actions. This paper gives a brief overview of Steve’s methods for generating multi-modal behavior, contrasting our work with prior work on task-oriented dialogs and multi-modal explanation generation.


Copyright © AAAI. All rights reserved.