Published:
2018-02-08
Proceedings:
Proceedings of the AAAI Conference on Artificial Intelligence, 32
Issue:
Thirty-Second AAAI Conference on Artificial Intelligence 2018
Track:
AAAI Technical Track: Machine Learning
Abstract:
Biased decision making by machine learning systems is increasingly recognized as an important issue. Recently, techniques have been proposed to learn non-discriminatory classifiers by enforcing constraints in the training phase. Such constraints are either non-convex in nature (posing computational difficulties) or lack a clear probabilistic interpretation. Moreover, these techniques offer little insight into the more subjective notion of fairness. In this paper, we introduce a novel technique that achieves non-discrimination without sacrificing convexity or probabilistic interpretation. Our experimental analysis demonstrates the success of the method on popular real-world datasets, including ProPublica's COMPAS dataset. We also propose a new notion of fairness for machine learning and show that our technique satisfies this subjective fairness criterion.
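The general idea of enforcing fairness through a convex constraint at training time can be sketched with a small illustrative example: logistic regression regularized by the squared covariance between a binary sensitive attribute and the signed distance to the decision boundary. Because that covariance is linear in the weights, the penalty keeps the objective convex. This is a minimal sketch of the constraint family the abstract alludes to, not the paper's own formulation; the function name `train_fair_logreg` and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_fair_logreg(X, y, s, lam=10.0, lr=0.05, iters=1000):
    """Logistic regression with a convex fairness penalty (illustrative sketch).

    Penalizes cov(s, X @ w)^2, the squared covariance between the sensitive
    attribute s and the signed distance to the decision boundary. Since
    X @ w is linear in w, the penalty is convex and the overall objective
    (logistic loss + penalty) remains convex.
    """
    n, _ = X.shape
    w = np.zeros(X.shape[1])
    s_c = s - s.mean()                       # centered sensitive attribute
    for _ in range(iters):
        z = X @ w                            # signed distance to boundary
        p = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
        grad_loss = X.T @ (p - y) / n        # gradient of mean logistic loss
        cov = s_c @ z / n                    # covariance term, linear in w
        grad_fair = lam * 2.0 * cov * (X.T @ s_c / n)  # gradient of cov^2
        w -= lr * (grad_loss + grad_fair)
    return w
```

Setting `lam=0` recovers plain logistic regression; increasing `lam` drives the covariance between the sensitive attribute and the classifier's scores toward zero, trading some accuracy for reduced disparate treatment.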
DOI:
10.1609/aaai.v32i1.11662
ISSN 2374-3468 (Online) ISSN 2159-5399 (Print)
Published by AAAI Press, Palo Alto, California, USA. Copyright © 2018, Association for the Advancement of Artificial Intelligence. All Rights Reserved.