Research in intelligent narrative technologies has recently experienced a significant resurgence. As the field matures, devising principled evaluation methodologies will become increasingly important for ensuring continued progress. Because of the complexity of narrative phenomena, as well as the inherent subjectivity of narrative experiences, effectively evaluating intelligent narrative technologies poses significant challenges. In this paper, we present STORYEVAL, an evaluation framework for empirically studying computational models of narrative generation. Drawing on evaluation methodologies from cognitive science, human-computer interaction, and natural language processing, as well as techniques that have begun to emerge in the narrative technologies community, STORYEVAL consists of four complementary families of tools for evaluating both interactive and non-interactive narrative generation: Narrative Metrics, Cognitive-Affective Studies, Director-centric Studies, and Extrinsic Narrative Evaluations. We discuss the benefits and limitations of each family of techniques and illustrate their application with example narrative generators drawn from the field.