Abstract:
Relation extraction, the process of converting natural language text into structured knowledge, is increasingly important. Most successful techniques use supervised machine learning to generate extractors from sentences that have been manually labeled with the relations' arguments. Unfortunately, these methods require numerous training examples, which are expensive and time-consuming to produce. This paper presents ontological smoothing, a semi-supervised technique that learns extractors for a set of minimally-labeled relations. Ontological smoothing has three phases. First, it generates a mapping between the target relations and a background knowledge-base. Second, it uses distant supervision to heuristically generate new training examples for the target relations. Finally, it learns an extractor from a combination of the original and newly-generated examples. Experiments on 65 relations across three target domains show that ontological smoothing can dramatically improve precision and recall, even rivaling fully supervised performance in many cases.
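The three-phase pipeline described above can be sketched roughly as follows. This is an illustrative outline only, not the paper's implementation: the data structures (a knowledge base as a dict from KB relation names to sets of argument pairs, a corpus as (sentence, arg1, arg2) tuples) and the simple overlap heuristic for the mapping phase are assumptions made for the sketch.

```python
# Hypothetical sketch of ontological smoothing's three phases.
# kb: {kb_relation_name: {(arg1, arg2), ...}}  (assumed structure)
# corpus: [(sentence, arg1, arg2), ...]        (assumed structure)
# train_extractor: caller-supplied function that fits an extractor on examples

def ontological_smoothing(seed_examples, kb, corpus, train_extractor):
    """seed_examples: {target_relation: [(sentence, arg1, arg2), ...]}"""
    extractors = {}
    for rel, seeds in seed_examples.items():
        # Phase 1: map the target relation to background KB relations,
        # here by overlap between seed argument pairs and KB instances.
        seed_pairs = {(a1, a2) for _, a1, a2 in seeds}
        mapped = [kb_rel for kb_rel, pairs in kb.items() if pairs & seed_pairs]

        # Phase 2: distant supervision -- corpus sentences mentioning an
        # argument pair of a mapped KB relation become heuristic positives.
        generated = [
            (sent, a1, a2)
            for sent, a1, a2 in corpus
            if any((a1, a2) in kb[kb_rel] for kb_rel in mapped)
        ]

        # Phase 3: learn an extractor from the original seeds plus the
        # newly generated examples.
        extractors[rel] = train_extractor(seeds + generated)
    return extractors
```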
DOI:
10.1609/aaai.v26i1.8102