This paper examines how different modalities of expression affect the reliability of crowdsourced sentiment polarity judgments. We created a novel corpus of YouTube video reviews and obtained sentiment judgments via Amazon Mechanical Turk. A system was built to isolate the text, video, and audio modalities of each YouTube video so that annotators could perceive only the particular modality or modalities under evaluation. Reliability of judgments was assessed using Fleiss' kappa inter-annotator agreement values. We found that the audio-only modality produced the most reliable judgments for video fragments, and that, across modalities, video fragments are less ambiguous than full videos.
Published Date: 2015-11-12
Registration: ISBN 978-1-57735-740-7
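The agreement measure used in the abstract, Fleiss' kappa, can be sketched as follows. This is a minimal illustration of the standard statistic, not the paper's actual evaluation code; the function and variable names are our own.

```python
# Minimal sketch of Fleiss' kappa for inter-annotator agreement.
# `counts` is an N x k matrix: counts[i][j] = number of annotators who
# assigned item i to sentiment category j. Every item is assumed to
# have the same total number of annotators.

def fleiss_kappa(counts):
    n_items = len(counts)
    n_raters = sum(counts[0])  # annotators per item (assumed constant)
    totals = [sum(col) for col in zip(*counts)]  # per-category totals
    # Mean observed per-item agreement P-bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Expected chance agreement P_e from marginal category proportions.
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

For example, two items each rated by three annotators with unanimous (but opposite) polarity labels, `[[3, 0], [0, 3]]`, yield a kappa of 1.0, while maximally split ratings push kappa below zero.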