Analogical reasoning is crucial to robust and flexible high-level cognition. However, progress on computational models of analogy has been impeded by our inability to quickly and accurately collect large numbers (100+) of semantically annotated texts. The Story Workbench is a tool that facilitates such annotation by using natural language processing techniques to propose an initial annotation, which a human annotator then approves, corrects, and elaborates. Central to this approach is a sophisticated graphical user interface that can guide even an untrained annotator through the annotation process. I describe five desiderata that govern the design of the Story Workbench, and demonstrate how each desideratum is fulfilled in the current implementation. The Story Workbench enables numerous experiments that were previously prohibitively laborious, of which I describe three currently underway in my lab.