Research in distributed AI has dealt with the interactions of agents, both cooperative and self-interested. The Recursive Modeling Method (RMM) is one method for modeling rational self-interested agents. It assumes that knowledge is nested only to a finite depth. An extension of RMM using a sigmoid function was proposed with the hope that the solution concept of the new RMM would approximate the Nash equilibrium point (Nash EP) in cases where RMM's knowledge approximated the common knowledge assumed by game theory. In this paper, we present a mathematical analysis of RMM with the sigmoid function and prove that it does indeed try to converge to the Nash EP. However, we also show how and why it fails to do so in most cases. Using this analysis, we argue for abandoning the sigmoid function as an implicit representation of uncertainty about the depth of knowledge, in favor of an explicit representation of that uncertainty. We also suggest other avenues of research that might yield more efficient solution concepts, ones that also take into consideration the cost of computation and the expected gains.