Abstract:
Most inductive inference algorithms work best when their training data contain completely specified, labeled samples. In many environments, however, the person collecting the data may record the values of only some of the attributes, and so provide the learner with only partially specified samples. This can be modeled as a blocking process that hides the values of certain attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only superfluous attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of "superfluous-value blocking", and then demonstrate that these omissions can be quite useful, showing that a class that is seemingly hard to learn in the general PAC model -- viz., decision trees -- is trivial to learn in this setting. We also show how this model can be applied to the theory revision problem.
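To make the blocking model concrete, the following Python sketch (our illustration, not code from the paper; the tree encoding, the function name superfluous_value_blocker, and the use of None to mark blocked values are all assumptions) shows a blocker that, when the target concept is a decision tree, reveals only the attribute values tested on an instance's root-to-leaf classification path and hides the rest, since those remaining values are superfluous for classifying that instance.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Leaf:
        label: int                     # class label (e.g., 0 or 1)

    @dataclass
    class Node:
        attr: int                      # index of the attribute tested at this node
        children: dict                 # attribute value -> subtree

    Tree = Union[Leaf, Node]

    def superfluous_value_blocker(tree: Tree, instance: list) -> tuple:
        """Return (blocked_instance, label): values of attributes not tested on
        the instance's classification path are replaced by None (blocked)."""
        revealed = [None] * len(instance)
        node = tree
        while isinstance(node, Node):
            v = instance[node.attr]
            revealed[node.attr] = v    # this value is needed, so it stays visible
            node = node.children[v]
        return revealed, node.label

    # Hypothetical tree: test attribute 0; if it is 1, also test attribute 2.
    tree = Node(0, {0: Leaf(0), 1: Node(2, {0: Leaf(0), 1: Leaf(1)})})
    print(superfluous_value_blocker(tree, [1, 0, 1]))   # ([1, None, 1], 1)
    print(superfluous_value_blocker(tree, [0, 1, 1]))   # ([0, None, None], 0)

Under this sketch, the pattern of unblocked values in each sample traces exactly one path of the target tree, which is the intuition behind the paper's claim that decision trees become easy to learn in this setting.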