Published:
2019-10-21
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7
Issue:
Vol. 7 (2019): Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing
Track:
Technical Papers
Abstract:
Crowdsourcing plays a key role in developing algorithms for image recognition and captioning. Major datasets, such as MS COCO and Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including the stereotyping of people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this practice. We conduct experiments at Figure Eight using a controlled set of people images: men and women of various races are positioned in the same manner, wearing a grey t-shirt. We prompt workers for 10 descriptive labels and analyze them using a human-centric approach, which assumes reporting bias. We find that “what’s worth saying” about these uniform images often differs as a function of the gender and race of the depicted person, violating the notion of group fairness. Although this diversity in natural language descriptions of people is expected and often beneficial, it could result in automated disparate impact if not managed properly.
DOI:
10.1609/hcomp.v7i1.5267
ISBN:
978-1-57735-820-6