The standard development of a dialogue system today involves the following steps: corpus collection and analysis, system development guided by that analysis, and finally, rigorous evaluation. Evaluation often involves more than one version of the system, for example, when it is desirable to show the effect of system parameters that differ from one version to another. In this paper, we discuss the difficulties that small research groups face in pursuing the development of dialogue systems. The primary difficulties are the lack of adequate resources and the excessive amount of time it takes to see a system through to a meaningful evaluation. As a case in point, we discuss our development and evaluation of a natural language generation component that improves the feedback provided by an interactive tutoring system. Our goal has been to use relatively inexpensive text structuring techniques to make aggregated content more fluent and comprehensible.