Communication-Efficient Stochastic Gradient MCMC for Neural Networks

Authors

  • Chunyuan Li, Microsoft Research
  • Changyou Chen, State University of New York at Buffalo
  • Yunchen Pu, Facebook
  • Ricardo Henao, Duke University
  • Lawrence Carin, Duke University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014173

Abstract

Learning probability distributions on the weights of neural networks has recently proven beneficial in many applications. Bayesian methods such as Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) offer an elegant framework to reason about model uncertainty in neural networks. However, these advantages usually come with a high computational cost. We propose accelerating SG-MCMC under the master-worker framework: workers asynchronously and in parallel share responsibility for gradient computations, while the master collects the final samples. To reduce communication overhead, two protocols (downpour and elastic) are developed to allow periodic interaction between the master and workers. We provide a theoretical analysis on the finite-time estimation consistency of posterior expectations, and establish connections to sample thinning. Our experiments on various neural networks demonstrate that the proposed algorithms can greatly reduce training time while achieving comparable (or better) test accuracy/log-likelihood levels, relative to traditional SG-MCMC. When applied to reinforcement learning, the approach naturally provides exploration for asynchronous policy optimization, with encouraging performance improvements.
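
To illustrate the general idea of periodic master-worker communication described in the abstract, below is a minimal sketch of a worker loop for stochastic gradient Langevin dynamics (SGLD) with downpour-style periodic synchronization. This is not the authors' implementation; the step-size schedule, synchronization rule, and the helper callables `grad_log_posterior`, `pull_from_master`, and `push_to_master` are hypothetical placeholders introduced only for illustration.

```python
import numpy as np

def sgld_worker(theta, grad_log_posterior, data_batches,
                step_size=1e-4, sync_every=50,
                pull_from_master=None, push_to_master=None):
    """Run local SGLD updates, exchanging information with the master
    only every `sync_every` iterations to limit communication cost."""
    local_samples = []
    for t, batch in enumerate(data_batches):
        # Stochastic gradient of the log posterior on a minibatch.
        g = grad_log_posterior(theta, batch)
        # SGLD update: half gradient step plus Gaussian noise scaled by the step size.
        noise = np.random.normal(0.0, np.sqrt(step_size), size=theta.shape)
        theta = theta + 0.5 * step_size * g + noise
        local_samples.append(theta.copy())
        # Periodic (rather than per-step) interaction with the master.
        if (t + 1) % sync_every == 0:
            if push_to_master is not None:
                push_to_master(local_samples)   # master collects posterior samples
                local_samples = []
            if pull_from_master is not None:
                theta = pull_from_master()      # refresh the local parameter copy
    return theta
```

In this sketch, communication frequency is controlled by `sync_every`; an elastic-style variant would instead pull the master's parameters and move the local copy partway toward them, rather than overwriting it.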

Published

2019-07-17

How to Cite

Li, C., Chen, C., Pu, Y., Henao, R., & Carin, L. (2019). Communication-Efficient Stochastic Gradient MCMC for Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4173-4180. https://doi.org/10.1609/aaai.v33i01.33014173

Section

AAAI Technical Track: Machine Learning