TY  - JOUR
AU  - Skantze, Gabriel
PY  - 2017/01/17
Y2  - 2024/03/28
TI  - Real-Time Coordination in Human-Robot Interaction Using Face and Voice
JF  - AI Magazine
JA  - AIMag
VL  - 37
IS  - 4
SE  - Articles
DO  - 10.1609/aimag.v37i4.2686
UR  - https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2686
SP  - 19
EP  - 31
AB  - When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds.
ER  - 