Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing (HCOMP-20)
Lora Aroyo and Elena Simperl, General Cochairs
Sponsored by the Association for the Advancement of Artificial Intelligence
A Virtual Conference, October 25–29, 2020
Volume 8, Number 1 (2020)
Published by the AAAI Press, Palo Alto, California USA
Copyright © 2020,
Association for the Advancement of Artificial Intelligence
2275 East Bayshore Road, Suite 160, Palo Alto, California 94303
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.
200 pp., illus., references.
ISBN 978-1-57735-848-0
Published 25 October 2020
HCOMP is aimed at promoting the scientific exchange of advances in human computation and crowdsourcing among researchers, engineers, and practitioners across a spectrum of disciplines. The conference was created by researchers from diverse fields to serve as a key focal point and scholarly venue for the review and presentation of the highest quality work on principles, studies, and applications of human computation. The meeting seeks and embraces work on human computation and crowdsourcing in multiple fields, including human-computer interaction, cognitive psychology, economics, information retrieval, databases, systems, optimization, and multiple subdisciplines of artificial intelligence, such as vision, speech, robotics, machine learning, and planning.
The 8th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2020) was held virtually October 25–29, 2020, hosted by The Netherlands Institute for Sound and Vision in Hilversum, The Netherlands.
Contents
Full Archival Papers
Trainbot: A Conversational Interface to Train Crowd Workers for Delivering On-Demand Therapy
Tahir Abbas, Vassilis-Javed Khan, Ujwal Gadiraju, Panos Markopoulos
Pages 3-12
Privacy-Preserving Face Redaction Using Crowdsourcing
Abdullah B. Alshaibani, Sylvia T. Carrell, Li-Hsin Tseng, Jungmin Shin, Alexander J. Quinn
Pages 13-22
CrowDEA: Multi-View Idea Prioritization with Crowds
Yukino Baba, Jiyi Li, Hisashi Kashima
Pages 23-32
Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content
Anubrata Das, Brandon Dang, Matthew Lease
Pages 33-42
Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining
Alexander Erlei, Franck Awounang Nekdem, Lukas Meub, Avishek Anand, Ujwal Gadiraju
Pages 43-52
How Context Influences Cross-Device Task Acceptance in Crowd Work
Danula Hettiachchi, Senuri Wijenayake, Simo Hosio, Vassilis Kostakos, Jorge Goncalves
Pages 53-62
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
Pages 63-72
Enhancing Collective Estimates by Aggregating Cardinal and Ordinal Inputs
Ryan Kemmer, Yeawon Yoo, Adolfo R. Escobedo, Ross Maciejewski
Pages 73-82
Understanding the Effects of Explanation Types and User Motivations on Recommender System Use
Qing Li, Sharon Lynn Chu, Nanjie Rao, Mahsan Nourani
Pages 83-91
Predicting Crowdworkers' Performance as Human-Sensors for Robot Navigation
Nir Machlev, David Sarne
Pages 92-101
Effective Operator Summaries Extraction
Ido Nimni, David Sarne
Pages 102-111
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Mahsan Nourani, Joanie T. King, Eric D. Ragan
Pages 112-121
Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training
Amy Rechkemmer, Ming Yin
Pages 122-131
Verifying Extended Entity Relationship Diagrams with Open Tasks
Marta Sabou, Klemens Käsznar, Markus Zlabinger, Stefan Biffl, Dietmar Winkler
Pages 132-140
Analyzing Workers Performance in Online Mapping Tasks Across Web, Mobile, and Virtual Reality Platforms
Gerard van Alphen, Sihang Qiu, Alessandro Bozzon, Geert-Jan Houben
Pages 141-149
Short Papers
Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection
Sohail Akhtar, Valerio Basile, Viviana Patti
Pages 151-154
Does Exposure to Diverse Perspectives Mitigate Biases in Crowdwork? An Explorative Study
Xiaoni Duan, Chien-Ju Ho, Ming Yin
Pages 155-158
The Challenges of Crowd Workers in Rural and Urban America
Claudia Flores-Saviaga, Yuwen Li, Benjamin V. Hanrahan, Jeffrey Bigham, Saiph Savage
Pages 159-162
Batch Prioritization of Data Labeling Tasks for Training Classifiers
Masanari Kimura, Kei Wakabayashi, Atsuyuki Morishima
Pages 163-167
How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Hua Shen, Ting-Hao (Kenneth) Huang
Pages 168-172
A Case for Soft Loss Functions
Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio
Pages 173-177
Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work
Xiaotong (Tone) Xu, Judith E. Fan, Steven P. Dow
Pages 178-182