A Multimodal Human-Computer Interface for the Control of a Virtual Environment

Gregory Berry, Vladimir Pavlovic, and Thomas Huang

To further advances in Human Computer Intelligent Interaction (HCII), we employ an approach that integrates two modes of human-computer communication to control a virtual environment. Using auditory and visual modes in the form of speech and gesture recognition, we outline the control of a task-specific virtual environment without the need for traditional large-scale virtual reality (VR) interfaces such as a wand, mouse, or keyboard. By combining features from both speech and gesture, a unique interface is created in which the different modalities complement each other in a more "human" communication style.
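The combination described above can be illustrated with a small sketch of late fusion, in which the speech recognizer supplies the command verb and the gesture recognizer supplies a spatial referent. This is a hypothetical illustration of the general idea, not the authors' actual system; the function and field names are assumptions.

```python
# Hypothetical sketch of speech/gesture late fusion: a verbal command
# ("move", "rotate", ...) is paired with a deictic (pointing) gesture
# to yield one virtual-environment action.

def fuse(speech_command, gesture):
    """Combine a recognized spoken command with a recognized gesture.

    speech_command: string from a speech recognizer, e.g. "move".
    gesture: dict with a gesture "type" and, for pointing gestures,
             a 3-D "position" in the virtual environment.
    """
    if gesture.get("type") == "point":
        # Speech supplies the verb; gesture supplies the target location.
        return {"action": speech_command, "target": gesture["position"]}
    # No spatial referent: the command applies without a target.
    return {"action": speech_command, "target": None}

action = fuse("move", {"type": "point", "position": (1.0, 0.5, 2.0)})
print(action)  # {'action': 'move', 'target': (1.0, 0.5, 2.0)}
```

In this style of fusion each modality contributes what it expresses best: speech carries the symbolic command while gesture carries the spatial reference, mirroring natural human communication.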

Copyright © AAAI. All rights reserved.