Proceedings:
Vol. 14 (2020): Fourteenth International AAAI Conference on Web and Social Media
Track:
Full Papers
Abstract:
With the recent rise of toxicity in online conversations on social media platforms, using modern machine learning algorithms for toxic comment detection has become a central focus of many online applications. Researchers and companies have developed a variety of models to identify toxicity in online conversations, reviews, or comments, with mixed success. However, many existing approaches have learned to incorrectly associate non-toxic comments containing certain trigger words (e.g., gay, lesbian, black, muslim) with toxicity. In this paper, we evaluate several state-of-the-art models with the specific focus of reducing model bias towards these commonly attacked identity groups. We propose a multi-task learning model with an attention layer that jointly learns to predict the toxicity of a comment as well as the identities mentioned in it, in order to reduce this bias. We then compare our model to an array of shallow and deep learning models using metrics designed specifically to measure unintended model bias within these identity groups.
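To make the multi-task idea concrete, the sketch below shows one plausible shape such a model could take: a shared encoder with an attention pooling layer feeding two heads, one predicting toxicity and one predicting which identity groups a comment mentions, trained with a joint loss. This is a minimal illustration, not the authors' implementation; the BiLSTM encoder, additive attention, layer sizes, and the count of nine identity groups are all placeholder assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskToxicityModel(nn.Module):
    """Illustrative multi-task model (not the paper's architecture):
    a shared encoder with attention pooling and two task heads."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=128,
                 num_identities=9):  # 9 identity groups is a placeholder
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Additive attention over encoder states builds a comment vector.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Task-specific heads share the attended representation.
        self.toxicity_head = nn.Linear(2 * hidden_dim, 1)
        self.identity_head = nn.Linear(2 * hidden_dim, num_identities)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embedding(token_ids))
        weights = torch.softmax(self.attn(states), dim=1)  # (B, T, 1)
        pooled = (weights * states).sum(dim=1)             # (B, 2H)
        return self.toxicity_head(pooled), self.identity_head(pooled)

# Joint training objective: the identity head gives the model an explicit
# signal that identity mentions alone do not imply toxicity.
model = MultiTaskToxicityModel()
tokens = torch.randint(1, 30000, (4, 20))        # dummy token batch
tox_target = torch.rand(4, 1)                    # toxicity score in [0, 1]
id_target = torch.randint(0, 2, (4, 9)).float()  # multi-label identities
tox_logits, id_logits = model(tokens)
bce = nn.BCEWithLogitsLoss()
loss = bce(tox_logits, tox_target) + bce(id_logits, id_target)
loss.backward()
```

The key design point the sketch captures is that both heads read the same attended representation, so gradients from the identity-prediction task shape the shared encoder alongside the toxicity task.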
DOI:
10.1609/icwsm.v14i1.7334