Published:
2020-10-09
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8
Volume/Issue:
Vol. 8 (2020): Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
While most user content posted on social media is benign, other content, such as violent or adult imagery, must be detected and blocked. Unfortunately, such detection is difficult to automate, due to high accuracy requirements, costs of errors, and nuanced rules for acceptable content. Consequently, social media platforms today rely on a vast workforce of human moderators. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to some moderators. To mitigate such harm, we investigate a set of blur-based moderation interfaces for reducing exposure to disturbing content whilst preserving moderator ability to quickly and accurately flag it. We report experiments with Mechanical Turk workers to measure moderator accuracy, speed, and emotional well-being across six alternative designs. Our key findings show interactive blurring designs can reduce emotional impact without sacrificing moderation accuracy and speed.
DOI:
10.1609/hcomp.v8i1.7461
ISBN 978-1-57735-848-0