Published:
2020-10-09
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8
Issue:
Vol. 8 (2020): Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
Recent advances in machine learning have led to the widespread adoption of ML models for decision support systems. However, little is known about how the introduction of such systems affects the behavior of human stakeholders. This pertains both to the people using the system and to those affected by its decisions. To address this knowledge gap, we present a series of ultimatum bargaining game experiments comprising 1178 participants. We find that users are willing to use a black-box decision support system and thereby make better decisions. This translates into higher levels of cooperation and better market outcomes. However, because users underweight algorithmic advice, market outcomes remain far from optimal. Explanations increase the number of unique system inquiries, but users appear less willing to follow the system’s recommendation. People who negotiate with a user who has a decision support system, but cannot use one themselves, react to its introduction by demanding a better deal for themselves, thereby decreasing overall cooperation levels. This effect is largely driven by the percentage of participants who perceive the system’s availability as unfair. Interpretability mitigates perceptions of unfairness. Our findings highlight the potential for decision support systems to further human cooperation, but also the need for regulators to consider heterogeneous stakeholder reactions. In particular, higher levels of transparency might inadvertently hurt cooperation through changes in fairness perceptions.
DOI:
10.1609/hcomp.v8i1.7462
ISBN:
978-1-57735-848-0