Toward Helpful Robot Teammates: A Simulation-Theoretic Approach for Inferring Mental States of Others

Jesse Gray and Cynthia Breazeal

As robots enter the human environment, they must be able to communicate and cooperate with novice users. Toward this goal, understanding human nonverbal behavior is a critical skill: not only recognizing humans' actions, but also inferring mental states from observable behavior. This capability would allow a robot to offer predictive and relevant assistance to a human. Simulation Theory argues for an embodied account of how humans infer the mental states of others (e.g., intents, beliefs, affect): humans reuse parts of the cognitive structure they use to generate their own behavior in order to simulate and interpret the behavior of others. Inspired by this theory, we describe our simulation-theoretic approach and implementation that enables a robot to monitor a human by simulating the human's behavior within the robot's own generative mechanisms at the motion, action, and perceptual levels. This grounds the robot's information about the user in the robot's own systems, allowing it to make inferences about the human's goals and knowledge that are immediately useful for providing helpful behavior, such as helping to complete an action or pointing out an occluded object. We believe that designing a robot's individual systems to allow this type of dual use, and reusing them in this manner, is a powerful technique for designing robots that interact with humans.
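The dual-use idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: a toy 1-D world where goals are target positions, `generate_action` is the robot's own generative policy, and the same policy is reused in simulation to infer which candidate goal best explains an observed human trajectory.

```python
def generate_action(position, goal):
    """The robot's own generative policy: step toward the goal (toy dynamics)."""
    if goal > position:
        return +1
    if goal < position:
        return -1
    return 0

def simulate(start, goal, n_steps):
    """Run the robot's own policy forward to predict a trajectory."""
    traj, pos = [], start
    for _ in range(n_steps):
        pos += generate_action(pos, goal)
        traj.append(pos)
    return traj

def infer_goal(observed_traj, start, candidate_goals):
    """Dual use: reuse the generative policy to find the candidate goal whose
    simulated trajectory best matches the observed human trajectory."""
    def mismatch(goal):
        sim = simulate(start, goal, len(observed_traj))
        return sum(abs(a - b) for a, b in zip(sim, observed_traj))
    return min(candidate_goals, key=mismatch)

# A human starting at 0 is observed moving right; of the hypothetical goals
# -5 and 5, the simulation matching the observation selects 5.
print(infer_goal([1, 2, 3], start=0, candidate_goals=[-5, 5]))  # -> 5
```

In the paper's terms, the same machinery would operate over the robot's motion, action, and perceptual systems rather than a toy policy, but the structure is the same: one generative model serves both to act and to interpret.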


Copyright © AAAI. All rights reserved.