Multimodal Conversation between a Humanoid Robot and Multiple Persons

Maren Bennewitz, Felix Faber, Dominik Joho, Michael Schreiber, and Sven Behnke

Attracting people and involving multiple persons in an interaction is an essential capability for a humanoid robot. A prerequisite for such behavior is that the robot is able to sense people in its vicinity and to know where they are located. In this paper, we propose an approach that maintains a probabilistic belief about people in the surroundings of the robot. Using this belief, the robot is able to keep track of people even when they are currently outside its limited field of view. Furthermore, we use a technique to localize a speaker in the environment. In this way, even people who are currently not the primary conversational partners, or who are not stored in the robot’s belief, can attract its attention. To enrich human-robot interaction and to express how the robot’s mood changes, we apply a technique to alter its facial expressions. As we demonstrate in practical experiments, by integrating the presented techniques into its control architecture, our robot is able to interact with multiple persons in a multimodal way and to shift its attention between different people.
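To illustrate the kind of probabilistic belief described above, the following is a minimal sketch, not the paper's actual method: a Gaussian belief over one person's bearing whose uncertainty grows while the person is outside the field of view and shrinks again when the person is re-observed. The class name, variance values, and Kalman-style fusion rule are all assumptions chosen for illustration.

```python
class PersonBelief:
    """Hypothetical Gaussian belief over one person's bearing (radians).

    A sketch of how a robot might memorize people outside its field of
    view: the mean is the last estimated bearing, and the variance grows
    while the person goes unobserved.
    """

    def __init__(self, angle, var=0.05):
        self.angle = angle  # mean bearing to the person, in radians
        self.var = var      # uncertainty of the estimate

    def predict(self, process_noise=0.01):
        # No observation (person outside the field of view):
        # uncertainty about the person's position grows.
        self.var += process_noise

    def update(self, observed_angle, sensor_var=0.02):
        # Standard Gaussian (Kalman-style) fusion of the predicted
        # belief with a new bearing observation.
        k = self.var / (self.var + sensor_var)
        self.angle += k * (observed_angle - self.angle)
        self.var *= 1.0 - k


belief = PersonBelief(angle=0.5)
for _ in range(3):
    belief.predict()       # person has left the field of view
belief.update(0.7)         # person re-observed at bearing 0.7 rad
```

After the three prediction steps the variance has grown from 0.05 to 0.08, so the re-observation is weighted strongly and pulls the mean most of the way toward 0.7. A full system would maintain one such belief per tracked person and add data association, but the grow-while-unseen / shrink-on-observation cycle is the core idea.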

This page is copyrighted by AAAI. All rights reserved.