A key obstacle to fielding AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in software lifecycle costs with other means of automation, such as scripts or rule-based expert systems. Consequently, to field real systems, planning practitioners must be able to provide: 1. tools that allow domain experts to create and debug their own planning knowledge bases; 2. tools for software verification, validation, and testing; and 3. tools to facilitate updates and maintenance of the planning knowledge base. This paper describes two types of tools for planning knowledge base development: static KB analysis techniques, which detect certain classes of syntactic errors in a planning knowledge base, and completion analysis techniques, which support interactive debugging of the planning knowledge base. We describe these knowledge development tools and present empirical results documenting their usefulness.
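To give a flavor of the kind of syntactic error a static KB analysis tool can catch, the following is a minimal, hypothetical sketch (the KB representation and function names are invented for illustration; they are not the paper's actual format): it scans operator definitions for references to undeclared predicates and for arity mismatches.

```python
# Hypothetical sketch of one static KB analysis check: flag operator
# literals that reference undeclared predicates or use the wrong arity.
# The KB representation below is invented for illustration only.

def undeclared_predicate_errors(predicates, operators):
    """predicates: {name: arity}; operators: {op_name: [(pred, args), ...]}."""
    errors = []
    for op_name, literals in operators.items():
        for pred, args in literals:
            if pred not in predicates:
                errors.append(f"{op_name}: undeclared predicate '{pred}'")
            elif len(args) != predicates[pred]:
                errors.append(
                    f"{op_name}: '{pred}' expects {predicates[pred]} "
                    f"argument(s), got {len(args)}")
    return errors

# A toy KB with two deliberate bugs in the 'drop' operator.
predicates = {"at": 2, "holding": 1}
operators = {
    "pickup": [("at", ("robot", "loc")), ("holding", ("obj",))],
    "drop":   [("holdng", ("obj",)),      # typo: undeclared predicate
               ("at", ("obj",))],         # arity mismatch: 'at' takes 2 args
}

for err in undeclared_predicate_errors(predicates, operators):
    print(err)
```

Checks of this kind are purely syntactic: they require no planning runs, so they can be applied cheaply every time a domain expert edits the KB.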