Published:
2021-11-14
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 9
Issue:
Vol. 9 (2021): Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Archival Papers
Abstract:
Data quality is a key concern for artificial intelligence (AI) efforts that rely on crowdsourced data collection. In the domain of medicine in particular, labeled data must meet high quality standards, or the resulting AI may perpetuate biases or lead to patient harm. What are the challenges involved in expert medical labeling? How do AI practitioners address such challenges? In this study, we interviewed members of teams developing AI for medical imaging in four subdomains (ophthalmology, radiology, pathology, and dermatology) about their quality-related practices. We describe one instance of low-quality labeling being caught by automated monitoring. The more proactive strategy, however, is to partner with experts in a collaborative, iterative process prior to the start of high-volume data collection. Best practices, including 1) co-designing labeling tasks and instructional guidelines with experts, 2) piloting and revising the tasks and guidelines, and 3) onboarding workers, enable teams to identify and address issues before they proliferate.
DOI:
10.1609/hcomp.v9i1.18940
ISBN:
978-1-57735-872-5