Learning and Inference with Constraints

Lev Ratinov, Ming-Wei Chang, Nicholas Rizzolo, Dan Roth

Probabilistic modeling has been a dominant approach in Machine Learning research. As the field evolves, the problems of interest become increasingly challenging and complex. Making complex decisions in real-world problems often involves assigning values to sets of interdependent variables, where an expressive dependency structure can influence, or even dictate, which assignments are possible. However, incorporating non-local dependencies into a probabilistic model can make training and inference intractable. This paper presents Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining the modularity and tractability of training. We further show that declarative constraints can be used to take advantage of unlabeled data when training the probabilistic model.
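To make the idea concrete, below is a minimal sketch of CCM-style inference on a toy BIO sequence-labeling task. The per-token scores, the penalty weight RHO, and the "no I after O" constraint are illustrative assumptions, not the paper's model: the general form combines a learned model score with weighted penalties for constraint violations, argmax_y sum_t score(x, y_t) - rho * violations(y). In the authors' related work, this argmax is typically solved as an integer linear program; here a brute-force search over a tiny output space stands in for the solver.

# A minimal sketch of CCM-style inference on a hypothetical BIO tagging task.
# Objective: argmax_y  sum_t score(x, y_t)  -  RHO * (constraint violations in y)
from itertools import product

LABELS = ["B", "I", "O"]

# Hypothetical per-token label scores from some trained local model
# (e.g., log-probabilities), one dict per token.
token_scores = [
    {"B": 0.1, "I": -0.3, "O": 1.2},
    {"B": 0.5, "I": 0.9, "O": 0.4},
    {"B": -0.5, "I": 0.8, "O": 0.2},
]

RHO = 10.0  # penalty per violation of the declarative constraint (soft constraint)

def violations(y):
    """Declarative constraint: an 'I' tag may not follow an 'O' tag
    (and may not start the sequence)."""
    count, prev = 0, "O"
    for tag in y:
        if tag == "I" and prev == "O":
            count += 1
        prev = tag
    return count

def ccm_score(y):
    """Model score minus weighted constraint-violation penalties."""
    model = sum(token_scores[t][tag] for t, tag in enumerate(y))
    return model - RHO * violations(y)

# Brute-force argmax over all label assignments; an ILP solver would be
# used instead for realistic output spaces.
best = max(product(LABELS, repeat=len(token_scores)), key=ccm_score)
print(best)  # -> ('O', 'B', 'I'); the unconstrained argmax ('O', 'I', 'I') is penalized away

Note how the constraint changes the decision: the locally best assignment ('O', 'I', 'I') violates the constraint and is discarded in favor of a feasible one, without any change to the underlying model or its training.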

Subjects: Natural Language Processing

Submitted: Apr 16, 2008

