Analyzing Human Trust of Autonomous Systems in Hazardous Environments

Daniel P. Stormont

Autonomous systems are becoming more prevalent in our everyday lives. From agent software that collects news for us or bids on our behalf in online auctions to robots that vacuum our floors and may help care for us when we grow old, autonomous agents promise to play a greater role in our lives in the future. Yet there is ample evidence that many people do not trust autonomous systems — especially in environments where human lives may be put at risk. Examples include search-and-rescue operations at disaster sites, military operations in a combat zone, and caregiving scenarios where a human’s life depends on the care given by an autonomous system. This paper uses previous work on trust in multi-agent systems as a basis for examining the factors that influence the trust humans place in autonomous systems. The technical and ethical implications of relying on autonomous systems (especially in combat areas) are then considered. Preliminary results from a simulation of a firefighting scenario, in which humans may need to rely on robots, are used to explore the effects of the different factors of human trust. Finally, directions for future research on human trust of autonomous systems are discussed.

Subjects: 6. Computer-Human Interaction; 17. Robotics

Submitted: May 7, 2008
