Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence, 31
Issue:
No. 2 (2017): The Twenty-Ninth Innovative Applications of Artificial Intelligence Conference
Track:
IAAI Challenge Papers
Abstract:
As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque content whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, he may want to understand its purpose. When an autonomous military robot has abandoned a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential that we be able to understand them.
DOI:
10.1609/aaai.v31i2.19108