Learning General Latent-Variable Graphical Models with Predictive Belief Propagation

Authors

  • Borui Wang, Stanford University
  • Geoffrey Gordon, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v34i04.6076

Abstract

Learning general latent-variable probabilistic graphical models is a key theoretical challenge in machine learning and artificial intelligence. Previous methods, including the EM algorithm and spectral algorithms, face severe limitations that restrict their applicability and degrade their performance. To overcome these limitations, we introduce a novel formulation of message-passing inference over junction trees named predictive belief propagation, and propose a new learning and inference algorithm for general latent-variable graphical models based on this formulation. Our algorithm reduces the hard problem of parameter learning to a sequence of supervised learning problems, and unifies the learning of different kinds of latent-variable graphical models within a single framework that is free of local optima and statistically consistent. We prove the correctness of our algorithm and show, in experiments on both synthetic and real datasets, that it significantly outperforms both the EM algorithm and the spectral algorithm while being orders of magnitude faster to compute.
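To make the central idea of the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the reduction it describes: replacing latent-variable message passing with a sequence of supervised regression problems. The junction tree here is the simplest possible one, a chain (an HMM); the "message" at each step is a predictive belief over the next observation, and each message update is learned by ridge regression. All names and modeling choices below (the HMM generator, the one-hot features, the choice of Ridge as the learner) are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# ---- Synthetic ground-truth HMM, used only to generate data ----
n_states, n_obs, T = 3, 4, 6
A = rng.dirichlet(np.ones(n_states), size=n_states)  # A[i, j] = P(s'=j | s=i)
O = rng.dirichlet(np.ones(n_obs), size=n_states)     # O[i, k] = P(o=k | s=i)

def sample_seq():
    s, out = rng.integers(n_states), []
    for _ in range(T):
        out.append(rng.choice(n_obs, p=O[s]))
        s = rng.choice(n_states, p=A[s])
    return out

train = np.array([sample_seq() for _ in range(5000)])
test = np.array([sample_seq() for _ in range(500)])

def onehot(x):
    v = np.zeros(n_obs); v[x] = 1.0; return v

m0 = np.mean([onehot(o) for o in train[:, 0]], axis=0)  # empirical P(o_1)

# ---- "Predictive belief propagation" as a chain of regressions ----
# m_t is the predictive belief E[onehot(o_t) | o_1..o_{t-1}]. Each update
# m_{t+1} = f_t(m_t, onehot(o_t)) is fit by supervised regression, using the
# observed one-hot o_{t+1} as an unbiased training target for m_{t+1}.
m = np.tile(m0, (len(train), 1))
updates = []
for t in range(T - 1):
    X = np.hstack([m, np.array([onehot(o) for o in train[:, t]])])
    y = np.array([onehot(o) for o in train[:, t + 1]])
    f = Ridge(alpha=1e-3).fit(X, y)
    updates.append(f)
    m = f.predict(X)            # denoised beliefs feed the next problem

# ---- Inference: run the learned filter on a held-out sequence ----
def learned_predict(seq):
    b = m0.copy()
    for t, f in enumerate(updates):
        b = f.predict(np.hstack([b, onehot(seq[t])])[None, :])[0]
        b = np.clip(b, 1e-8, None); b /= b.sum()  # project onto the simplex
    return b                    # approx. P(o_T | o_1..o_{T-1})

def exact_predict(seq):
    # Ground-truth forward algorithm, for comparison only.
    b = np.ones(n_states) / n_states
    for o in seq:
        b = b * O[:, o]; b /= b.sum(); b = A.T @ b
    return O.T @ b

err = np.mean([0.5 * np.abs(learned_predict(s) - exact_predict(s[:T - 1])).sum()
               for s in test])
print(f"avg total-variation error vs. exact forward filtering: {err:.3f}")

Each regression step above is an ordinary supervised problem with a closed-form solution, which is the sense in which such a reduction avoids the local optima of EM; the linear learner is just one convenient choice, and the paper's actual operators over general junction trees are more involved than this chain.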

Published

2020-04-03

How to Cite

Wang, B., & Gordon, G. (2020). Learning General Latent-Variable Graphical Models with Predictive Belief Propagation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6118-6126. https://doi.org/10.1609/aaai.v34i04.6076

Section

AAAI Technical Track: Machine Learning