In this paper, we first explore an intrinsic problem of the theories induced by learning algorithms. Regardless of the algorithm, search methodology, and hypothesis representation by which a theory is induced, one would expect the theory to make better predictions in some regions of the description space than in others. We call this the problem of locally low predictive accuracy: an induced theory will have some regions of relatively poor predictive performance. Having characterised this problem in Instance-Based and Naive Bayesian classifiers, we propose to counter it with a composite learner that incorporates both classifiers. At classification time, the composite learner selects the constituent classifier estimated to perform better on the instance at hand and uses it to make the final prediction. Empirical results from fifteen real-world domains show that this strategy partially overcomes the problem of locally low predictive accuracy and, in most of the domains studied, improves on the overall performance of its constituent classifiers. The composite learner also outperforms three methods of stacked generalisation, including the cross-validation method, in most of the experimental domains. We explain why the proposed composite learner performs better than stacked generalisation, and discern a condition under which it performs better than the better of its constituent classifiers.
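To make the selection strategy concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: a composite of a 1-nearest-neighbour classifier and a Gaussian naive Bayes classifier that, for each query, estimates each base classifier's accuracy on the query's nearest training points and delegates the final prediction to the locally better one. The class names, the choice of k, and the use of leave-one-out (for 1-NN) and resubstitution (for naive Bayes) accuracy estimates are all assumptions made for this sketch.

```python
# Illustrative sketch of per-instance classifier selection (assumed
# details: 1-NN + Gaussian naive Bayes as constituents, neighbourhood
# size k, and simple local accuracy estimates).
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class OneNN:
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict_one(self, x, exclude=None):
        # Nearest training point, optionally excluding index `exclude`
        # so the same routine serves leave-one-out estimation.
        best_i = min((i for i in range(len(self.X)) if i != exclude),
                     key=lambda i: dist(x, self.X[i]))
        return self.y[best_i]

class GaussianNB:
    def fit(self, X, y):
        self.classes = sorted(set(y))
        n = len(X)
        self.stats = {}
        for c in self.classes:
            Xc = [x for x, yi in zip(X, y) if yi == c]
            means = [sum(col) / len(Xc) for col in zip(*Xc)]
            vars_ = [sum((v - m) ** 2 for v in col) / len(Xc) + 1e-6
                     for col, m in zip(zip(*Xc), means)]
            self.stats[c] = (math.log(len(Xc) / n), means, vars_)
        return self
    def predict_one(self, x):
        def loglik(c):
            prior, means, vars_ = self.stats[c]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                for xi, m, v in zip(x, means, vars_))
        return max(self.classes, key=loglik)

class CompositeClassifier:
    """Per query, delegate to the base classifier with the higher
    estimated accuracy in the query's neighbourhood (a stand-in for
    selecting the 'estimated better performing classifier')."""
    def __init__(self, k=3):
        self.k = k
    def fit(self, X, y):
        self.X, self.y = X, y
        self.nn = OneNN().fit(X, y)
        self.nb = GaussianNB().fit(X, y)
        # Per-training-point correctness: leave-one-out for 1-NN,
        # resubstitution for NB (a simplification for this sketch).
        self.nn_ok = [self.nn.predict_one(x, exclude=i) == yi
                      for i, (x, yi) in enumerate(zip(X, y))]
        self.nb_ok = [self.nb.predict_one(x) == yi
                      for x, yi in zip(X, y)]
        return self
    def predict_one(self, x):
        # Local accuracy = fraction of the k nearest training points
        # each base classifier got right; ties go to 1-NN.
        idx = sorted(range(len(self.X)),
                     key=lambda i: dist(x, self.X[i]))[:self.k]
        nn_acc = sum(self.nn_ok[i] for i in idx) / self.k
        nb_acc = sum(self.nb_ok[i] for i in idx) / self.k
        return (self.nn.predict_one(x) if nn_acc >= nb_acc
                else self.nb.predict_one(x))

# Tiny synthetic two-class problem: class 0 near (0,0), class 1 near (5,5).
X = [(0.0, 0.0), (0.5, 0.3), (0.2, 0.8), (0.9, 0.1),
     (5.0, 5.0), (5.3, 4.8), (4.7, 5.2), (5.1, 5.4)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
model = CompositeClassifier(k=3).fit(X, y)
print(model.predict_one((0.3, 0.4)), model.predict_one((5.0, 5.1)))
```

The local accuracy estimate here is deliberately crude; the point is only the control flow: both constituents are trained on the full data, and selection between them happens per instance at classification time rather than once globally.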