Computational efficiency is a central concern in the design of knowledge representation systems. Compiling a knowledge base into a more tractable form has been suggested as a way around the inherent intractability of many representation formalisms. Because not all theories can be put into an equivalent tractable form, Selman and Kautz (1991) have suggested compiling a theory into upper and lower bounds (one logically weaker, the other logically stronger) that approximate the original information. A central question in this approach is how well the bounds capture the original knowledge. This question is inherently empirical. We present a detailed empirical evaluation of the compilation of two kinds of theories: computationally challenging randomly generated theories, and propositional encodings of planning problems. Our results show that one can answer a very high percentage of queries even using unit clause bounds, which are much easier to compute than more general tractable approximations. Furthermore, we demonstrate that many of the queries that can be answered by the bounds are expensive to answer using only the original theory: in other words, knowledge compilation does not just "skim off" easy queries. In fact, we show substantial total computational savings in using the bounds together with the original theory to answer all queries (with no errors) from a large benchmark set, over using the original theory alone. This study suggests that knowledge compilation may indeed be a practical approach for dealing with intractability in knowledge representation systems.
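The bound-based query answering scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clause representation (sets of signed integer literals) and the tiny DPLL-style satisfiability checker are assumptions made for the sketch. The key property is that if the theory T satisfies GLB |= T |= LUB, then LUB |= q guarantees T |= q (a fast "yes"), and GLB |/= q guarantees T |/= q (a fast "no"); only queries the bounds cannot decide fall back to the original, expensive theory.

```python
def satisfiable(clauses):
    """Tiny DPLL-style satisfiability check; clauses are sets of int literals."""
    clauses = [set(c) for c in clauses]
    if any(not c for c in clauses):          # an empty clause is unsatisfiable
        return False
    while True:                              # unit propagation
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:                     # clause satisfied; drop it
                continue
            if -lit in c:                    # complement literal; shrink clause
                c = c - {-lit}
                if not c:                    # derived the empty clause
                    return False
            new.append(c)
        clauses = new
    if not clauses:
        return True
    lit = next(iter(clauses[0]))             # branch on some remaining literal
    return satisfiable(clauses + [{lit}]) or satisfiable(clauses + [{-lit}])

def entails(theory, q):
    """theory |= q  iff  theory together with {-q} is unsatisfiable."""
    return not satisfiable(list(theory) + [{-q}])

def answer(theory, lub, glb, q):
    """Answer 'theory |= q?' using the bounds first (GLB |= theory |= LUB)."""
    if entails(lub, q):                      # weaker bound entails q: fast yes
        return True
    if not entails(glb, q):                  # stronger bound fails q: fast no
        return False
    return entails(theory, q)                # undecided: fall back to theory
```

For instance, with the theory {a or b, not-a or c, not-b or c}, the unit-clause LUB {c} answers a query about c without touching the original theory, while a query about a falls through both bounds and requires the full entailment test.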