Formal results in grammatical inference clearly have some relevance to first language acquisition, but the initial formalisations of the problem are inapplicable to this particular situation. In this paper we construct an appropriate formalisation of the problem, using a modern vocabulary drawn from statistical learning theory and grammatical inference, and examining the relevant empirical facts in detail. We claim that a variant of the Probably Approximately Correct (PAC) learning framework with positive samples only, modified so that it is not completely distribution free, is the appropriate choice. Some negative results derived from cryptographic problems appear to apply in this situation, but the existence of algorithms with provably good performance shows that these negative results are not as strong as they initially appear; moreover, recent algorithms for learning regular languages partially satisfy our criteria. We conclude by speculating about the extension of these results beyond regular languages.
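For readers unfamiliar with the framework, the standard PAC criterion that the paper modifies can be sketched roughly as follows (this is the textbook formulation, not the paper's own variant; the symbols $A$, $\mathcal{D}$, $L$, $H$ and the error measure are assumptions made here for illustration):

```latex
% A learner A PAC-learns a class C from positive data under a
% restricted family of distributions \mathcal{D} if:
\forall L \in C,\; \forall D \in \mathcal{D},\; \forall \epsilon, \delta \in (0,1):
\quad \Pr\bigl[\, D(L \,\triangle\, H) \le \epsilon \,\bigr] \ge 1 - \delta,
% where H = A(S) is the hypothesis produced from a sample S of
% m \ge \mathrm{poly}(1/\epsilon,\, 1/\delta,\, |L|) examples drawn
% i.i.d. from D, and L \triangle H is the symmetric difference.
```

In the positive-only, not completely distribution-free setting the abstract describes, $\mathcal{D}$ is restricted to distributions over positive examples of $L$, rather than ranging over all distributions as in the classical PAC definition.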