We report on an empirical study of supervised learning algorithms that induce models to resolve the meaning of ambiguous words in text. We find that the Naive Bayesian classifier is as accurate as several more sophisticated methods. This is a surprising result, since Naive Bayes makes simplifying assumptions about the disambiguation task that are not realistic. However, our results are consistent with a growing body of evidence that Naive Bayes acts as a satisficing model in a wide range of domains. We suggest that bias-variance decompositions of classification error can be used to identify and develop satisficing models.
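To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of a Naive Bayes word-sense classifier: each sense gets a prior from its training frequency, each context word contributes a smoothed conditional probability, and the predicted sense maximizes the sum of log probabilities. The toy "bank" data and function names are hypothetical, chosen only to illustrate the model's independence assumption over context words.

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Count sense priors and per-sense word frequencies.
    `examples` is a list of (context_words, sense) pairs."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, sense in examples:
        sense_counts[sense] += 1
        word_counts[sense].update(words)
        vocab.update(words)
    return sense_counts, word_counts, vocab

def classify(context, sense_counts, word_counts, vocab):
    """Return the sense maximizing log P(sense) + sum log P(word | sense),
    with add-one smoothing over the vocabulary."""
    total = sum(sense_counts.values())
    best, best_lp = None, float("-inf")
    for sense, n in sense_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Hypothetical toy training data for the ambiguous word "bank".
train = [
    (["river", "water", "fish"], "shore"),
    (["river", "boat", "shore"], "shore"),
    (["money", "loan", "deposit"], "finance"),
    (["interest", "money", "account"], "finance"),
]
model = train_nb(train)
print(classify(["water", "boat"], *model))    # → shore
print(classify(["loan", "account"], *model))  # → finance
```

The unrealistic assumption the abstract refers to is visible in the inner loop: each context word's probability is multiplied in independently of the others, ignoring any interaction between features.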