FairyTED: A Fair Rating Predictor for TED Talk Data

Authors

  • Rupam Acharyya, University of Rochester
  • Shouman Das, University of Rochester
  • Ankani Chattoraj, University of Rochester
  • Md. Iftekhar Tanveer, Comcast Applied AI Research

DOI:

https://doi.org/10.1609/aaai.v34i01.5368

Abstract

With the recent trend of applying machine learning to every aspect of human life, it is important to incorporate fairness into the core of predictive algorithms. We address the problem of predicting the quality of public speeches while remaining fair with respect to sensitive attributes of the speakers, e.g., gender and race. We use TED talks as an input repository of public speeches because they feature speakers from a diverse community and have a wide outreach. Drawing on the theory of causal models, counterfactual fairness, and state-of-the-art neural language models, we propose a mathematical framework for fair prediction of public speaking quality. We employ grounded assumptions to construct a causal model capturing how different attributes affect public speaking quality. This causal model enables the generation of counterfactual data used to train a fair predictive model. Our framework is general enough to accommodate any set of assumptions within the causal model. Experimental results show that while prediction accuracy is comparable to recent work on this dataset, our predictions are counterfactually fair with respect to a novel metric when compared to the true data labels. The FairyTED setup not only allows organizers to make an informed and diverse selection of speakers from the unobserved counterfactual possibilities, but also ensures that viewers and new users deciding whether to watch a talk are not influenced by unfair and unbalanced ratings from arbitrary visitors to the ted.com website.
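The counterfactual-fairness criterion the abstract invokes can be illustrated with a toy structural causal model: hold the exogenous noise fixed, flip the sensitive attribute, and check that the predictor's output does not change. The sketch below is purely illustrative (the model equations, function names, and the simple "ignore the attribute-dependent part" predictor are assumptions for demonstration, not the paper's actual FairyTED model):

```python
# Illustrative sketch of checking counterfactual fairness on a toy
# structural causal model (not the paper's actual implementation).
import random

def generate_feature(a, u):
    """Toy structural equation: sensitive attribute `a` (0 or 1) and
    exogenous noise `u` jointly determine an observed feature `x`."""
    return 2.0 * u + 0.5 * a   # feature is partly caused by `a`

def fair_predictor(x, a, u):
    """A predictor that depends only on the exogenous component `u`,
    so it is invariant under counterfactual changes of `a`."""
    return 3.0 * u             # ignores `a` and the a-dependent part of `x`

def counterfactual_gap(predictor, u):
    """|prediction in the a=0 world - prediction in the a=1 world|,
    holding the exogenous noise `u` fixed."""
    x0 = generate_feature(0, u)
    x1 = generate_feature(1, u)
    return abs(predictor(x0, 0, u) - predictor(x1, 1, u))

random.seed(0)
gaps = [counterfactual_gap(fair_predictor, random.gauss(0, 1))
        for _ in range(100)]
print(max(gaps))  # 0.0: the predictor is counterfactually fair
```

A naive predictor that consumed `x` directly (e.g., `return 3.0 * x`) would show a nonzero gap, since flipping `a` shifts `x`; this is the kind of disparity the counterfactual data generation in FairyTED is designed to remove.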

Published

2020-04-03

How to Cite

Acharyya, R., Das, S., Chattoraj, A., & Tanveer, M. I. (2020). FairyTED: A Fair Rating Predictor for TED Talk Data. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 338-345. https://doi.org/10.1609/aaai.v34i01.5368

Section

AAAI Special Technical Track: AI for Social Impact