A major difficulty in Inductive Logic Programming (ILP) lies in the size of the hypothesis space, and in the number of possible matchings between a candidate hypothesis and a training example. This paper investigates the use of a stochastic bias in order to make induction a tractable problem. A goal-directed mechanism for sampling the matching space is implemented: the exhaustive exploration of the matching space is replaced by the consideration of a fixed, user-supplied number of samples. One thereby constructs a theory that is only approximately consistent, with polynomial computational complexity in terms of the size of the data. This approach differs significantly from ILP approaches based on genetic algorithms or genetic programming, where stochastic sampling is used directly to explore the hypothesis space; in our approach, the sampling mechanism is combined with induction instead of replacing it. Experiments on the mutagenesis problem fully validate the approach in terms of both predictive accuracy and computational cost.
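The contrast between exhaustive matching and the fixed-sample stochastic matching described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the literal-as-tuple representation, the function names, and the convention that uppercase tokens are variables are all assumptions made for the example.

```python
import itertools
import random

def exhaustive_match(hypothesis, example):
    """Check whether some substitution maps the hypothesis's variables
    onto the example's constants so that every literal is satisfied.
    Worst case: exponential in the number of variables."""
    variables = sorted({t for lit in hypothesis for t in lit if t.isupper()})
    constants = sorted({t for lit in example for t in lit})
    for values in itertools.product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        if all(tuple(theta.get(t, t) for t in lit) in example
               for lit in hypothesis):
            return True
    return False

def stochastic_match(hypothesis, example, n_samples=100, rng=None):
    """Approximate matching: draw a fixed, user-supplied number of random
    substitutions instead of enumerating them all. May miss an existing
    matching (so the induced theory is only approximately consistent),
    but the cost is linear in n_samples, independent of the number of
    possible matchings."""
    rng = rng or random.Random(0)
    variables = sorted({t for lit in hypothesis for t in lit if t.isupper()})
    constants = sorted({t for lit in example for t in lit})
    for _ in range(n_samples):
        theta = {v: rng.choice(constants) for v in variables}
        if all(tuple(theta.get(t, t) for t in lit) in example
               for lit in hypothesis):
            return True
    return False

# Hypothesis p(X, Y) & q(Y); examples are sets of ground literals.
hyp = {("p", "X", "Y"), ("q", "Y")}
ex_pos = {("p", "a", "b"), ("q", "b")}
ex_neg = {("p", "a", "b"), ("q", "c")}
```

The stochastic version trades completeness for a user-controlled budget: it can only return a false negative, never a false positive, since every reported matching is explicitly verified against the example.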