The verification of Extended Entity Relationship (EER) diagrams and other conceptual models that capture the design of information systems is crucial for building reliable systems. To scale up verification to larger groups of experts, Human Computation (HC) techniques have been applied, focusing primarily on closed tasks, which constrain the number and variety of reported defects in favor of easy aggregation of the derived judgements. To address this limitation of closed tasks, in this paper we investigate EER verification (as an instance of a broader family of model verification problems) with open tasks to extend the range of collected results. We also address the challenge of aggregating the results of open tasks by proposing a follow-up HC task for defect validation. We evaluate our approach to HC-based EER verification with open tasks in a set of experiments conducted with junior developers and show that (1) open tasks allow collecting a variety of insights that go beyond a manually built gold standard while still yielding good performance (F1=60%), and (2) HC-based validation can be reliably used to validate the results of open tasks (F1=84% compared to expert validation).