AI for Explaining Decisions in Multi-Agent Environments

Authors

  • Sarit Kraus Bar-Ilan University
  • Amos Azaria Ariel University
  • Jelena Fiosina TU Clausthal
  • Maike Greve Georg-August-Universität Göttingen
  • Noam Hazon Ariel University
  • Lutz Kolbe Georg-August-Universität Göttingen
  • Tim-Benjamin Lembcke Georg-August-Universität Göttingen
  • Jörg P. Müller TU Clausthal
  • Sören Schleibaum TU Clausthal
  • Mark Vollrath TU Braunschweig

DOI:

https://doi.org/10.1609/aaai.v34i09.7077

Abstract

Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals, since these may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. Generating explanations that increase user satisfaction is very challenging; to this end, we propose a new research direction: Explainable decisions in Multi-Agent Environments (xMASE). We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with AI systems' decisions in multi-agent environments.

Published

2020-04-03

How to Cite

Kraus, S., Azaria, A., Fiosina, J., Greve, M., Hazon, N., Kolbe, L., Lembcke, T.-B., Müller, J. P., Schleibaum, S., & Vollrath, M. (2020). AI for Explaining Decisions in Multi-Agent Environments. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13534-13538. https://doi.org/10.1609/aaai.v34i09.7077

Section

Senior Member Presentation Track: Blue Sky Papers