Individuals today discuss information and form judgments as crowds in online communities and platforms. "Wisdom of the crowd" arguments suggest that, in theory, crowds can bring together diverse expertise, pooling distributed knowledge to solve challenging and complex problems. This paper concerns one way crowds might fall short of this ideal. A large body of research in the social psychology of small groups documents the shared information bias: a tendency for group members to focus on common knowledge at the expense of rarer information that only one or a few individuals possess. We investigated whether this well-known small-group bias also affects larger crowds of 30 participants working on Amazon's Mechanical Turk. We found that crowds failed to adequately pool distributed facts, that they were partially biased in which facts they shared, and that individuals' perceptions of the group's decision were unstable. Nonetheless, aggregating individual reports from the crowd yielded moderate performance on the assigned task.