Published:
2020-10-09
Proceedings:
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8
Issue:
Vol. 8 (2020): Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing
Track:
Full Papers
Abstract:
This paper proposes a heuristic algorithm for effectively summarizing the work of novice robot operators, e.g., ones recruited through crowdsourcing platforms, in search-and-rescue-like tasks. Such summaries can serve many purposes, perhaps most notably monitoring and evaluating an operator’s performance in settings where information gaps preclude automatic evaluation. The underlying idea of our method is to divide the task timeline into intervals and, using a heuristic scoring function, extract a subset of high-scoring and low-scoring segments within each. This yields a short, effective summary of the operator’s work, based on which several other crowdworkers can evaluate her performance. The effectiveness of the proposed method was extensively evaluated and compared to a large set of alternative methods through a series of experiments on Amazon Mechanical Turk. The analysis of the results reveals that the proposed method outperforms all tested alternatives. Finally, we evaluate the performance achievable using machine learning to predict the operator’s performance in our domain. While this approach reaches a performance level similar to that achieved with summaries, it requires an order-of-magnitude greater training effort (measured in crowdworkers’ time).
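The abstract's core idea — splitting the timeline into intervals and keeping the highest- and lowest-scoring segment from each — can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: the real heuristic scoring function is not given in the abstract, so a simple sum over per-step scores stands in for it, and the interval and segment lengths are assumed parameters.

```python
# Hypothetical sketch of interval-based summary extraction: divide a
# timeline of per-step scores into fixed intervals, score every
# fixed-length candidate segment with a placeholder heuristic (here, a
# plain sum), and keep the best- and worst-scoring segment per interval.
from typing import List, Tuple


def summarize(scores: List[float], interval_len: int,
              segment_len: int) -> List[Tuple[int, int]]:
    """Return sorted (start, end) index pairs of selected segments."""
    selected = set()
    for start in range(0, len(scores), interval_len):
        interval = scores[start:start + interval_len]
        if len(interval) < segment_len:
            continue  # skip a trailing interval too short to segment
        # Score every candidate segment inside this interval.
        seg_scores = [
            (sum(interval[i:i + segment_len]), i)
            for i in range(len(interval) - segment_len + 1)
        ]
        best_i = max(seg_scores)[1]
        worst_i = min(seg_scores)[1]
        for i in {best_i, worst_i}:  # dedupe if best == worst
            selected.add((start + i, start + i + segment_len))
    return sorted(selected)
```

Showing evaluators both extremes, rather than only highlights, is what lets them judge overall performance from a short clip; the segments returned here would index into the recorded operator footage.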
DOI:
10.1609/hcomp.v8i1.7468
ISBN 978-1-57735-848-0