Most of today’s terminological representation systems implement hybrid reasoning architectures in which a concept classifier reasons about concept definitions, while a separate recognizer computes instantiation relations between concepts and instances. Whereas most existing recognizer algorithms are designed to maximally exploit the reasoning supplied by the concept classifier, LOOM has experimented with recognition strategies that place less emphasis on the classifier and rely more heavily on LOOM’s backward-chaining query facility. This paper presents the results of experiments that test the performance of the LOOM algorithms. These results suggest that, at least for some applications, the LOOM approach to recognition is likely to outperform the classical approach. They also indicate that for some applications much better performance can be achieved by eliminating the recognizer entirely, in favor of a purely backward-chaining architecture. We conclude that no single recognition algorithm or strategy is best for all applications, and that an architecture offering a choice of inference modes is likely to be more useful than one offering only a single style of reasoning.
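The architectural contrast described above can be sketched in miniature. The following toy Python example (not LOOM’s actual implementation; the concept names and predicate encoding are illustrative assumptions) contrasts an eager recognizer, which computes all instantiation relations when an instance is asserted, with a lazy backward-chaining check that evaluates a concept definition only when a query demands it:

```python
# Toy sketch of the two recognition styles; concept definitions are
# reduced to simple membership predicates for illustration only.

# Hypothetical concept definitions: concept name -> membership test.
CONCEPTS = {
    "Person":   lambda inst: inst.get("type") == "person",
    "Parent":   lambda inst: inst.get("type") == "person"
                             and inst.get("children", 0) > 0,
    "Employer": lambda inst: inst.get("employees", 0) > 0,
}

def eager_recognize(instance):
    """Classical recognizer: compute every instantiation relation
    up front, as soon as the instance is asserted."""
    return {name for name, test in CONCEPTS.items() if test(instance)}

def backward_chain_holds(concept, instance):
    """Backward-chaining style: evaluate a single concept definition
    on demand, only when a query asks about that membership."""
    return CONCEPTS[concept](instance)

alice = {"type": "person", "children": 2}

# Eager recognition touches every concept definition.
all_memberships = eager_recognize(alice)

# The lazy check evaluates only the one definition the query needs.
is_parent = backward_chain_holds("Parent", alice)
```

In this toy form the trade-off the abstract points to is visible: the eager recognizer pays the cost of testing every concept on each assertion, whereas the purely backward-chaining check defers all work until query time, which can win when most memberships are never asked about.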