Lifted Proximal Operator Machines

Authors

  • Jia Li, Peking University
  • Cong Fang, Peking University
  • Zhouchen Lin, Peking University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014181

Abstract

We propose a new optimization method for training feedforward neural networks. By rewriting the activation function as an equivalent proximal operator, we approximate a feedforward neural network by adding the proximal operators to the objective function as penalties; hence the name lifted proximal operator machine (LPOM). LPOM is block multiconvex in all layer-wise weights and activations. This allows us to use block coordinate descent to update the layer-wise weights and activations. Most notably, we only use the mapping of the activation function itself, rather than its derivative, thus avoiding the gradient vanishing and blow-up issues that affect gradient-based training methods. Our method is therefore applicable to various non-decreasing Lipschitz-continuous activation functions, which may be saturating and non-differentiable. LPOM requires no auxiliary variables other than the layer-wise activations, and thus uses roughly the same amount of memory as stochastic gradient descent (SGD). Its parameter tuning is also much simpler. We further prove the convergence of the layer-wise weight and activation updates and point out that the optimization can be parallelized via asynchronous updates. Experiments on the MNIST and CIFAR-10 datasets testify to the advantages of LPOM.
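To make the abstract's core idea concrete, below is a minimal NumPy sketch, not the authors' actual algorithm: it checks numerically that ReLU is the proximal operator of the indicator of the nonnegative orthant, and runs block coordinate descent on a simplified lifted (quadratic-penalty) surrogate for a one-hidden-layer network, where the activation block is updated by applying only the activation map itself, never its derivative. All names, the penalty weight mu, and the surrogate objective are illustrative assumptions; the exact LPOM penalty and the update of the first-layer weights are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# 1) ReLU as a proximal operator: ReLU(x) = argmin_{u >= 0} 0.5 * (u - x)^2
x = np.linspace(-2, 2, 5)
u_grid = np.linspace(0, 3, 3001)                       # feasible set u >= 0
prox = u_grid[np.argmin(0.5 * (u_grid[None, :] - x[:, None]) ** 2, axis=1)]
assert np.allclose(prox, relu(x), atol=1e-3)

# 2) Toy lifted training by block coordinate descent (simplified surrogate)
n, d_in, d_h, d_out = 200, 5, 8, 1
X0 = rng.standard_normal((d_in, n))                    # inputs, one column per sample
Y = rng.standard_normal((d_out, n))                    # targets
W1 = 0.1 * rng.standard_normal((d_h, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_h))
X1 = relu(W1 @ X0)                                     # lifted (free) activation variables
mu = 1.0                                               # penalty weight (illustrative)

def surrogate():
    # quadratic-penalty stand-in for the lifted objective (not the LPOM penalty)
    return (0.5 * np.sum((W2 @ X1 - Y) ** 2)
            + 0.5 * mu * np.sum((X1 - relu(W1 @ X0)) ** 2))

for _ in range(50):
    # weight block: ordinary least squares given the current activations
    W2 = Y @ np.linalg.pinv(X1)
    # activation block: one proximal-gradient step; the prox of the
    # nonnegativity constraint is ReLU itself, so only the activation map
    # (never its derivative) is applied, as the abstract emphasizes.
    grad = W2.T @ (W2 @ X1 - Y) + mu * (X1 - relu(W1 @ X0))
    step = 1.0 / (np.linalg.norm(W2, 2) ** 2 + mu)     # Lipschitz-based step size
    X1 = relu(X1 - step * grad)
    # (a full method would also update W1 in its own block; omitted here)

print("final surrogate objective:", surrogate())
```

Because the activation block is updated through a proximal (projection-like) step rather than through backpropagated derivatives, saturating or non-differentiable activations cause no difficulty, which is the property the abstract highlights.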

Published

2019-07-17

How to Cite

Li, J., Fang, C., & Lin, Z. (2019). Lifted Proximal Operator Machines. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4181-4188. https://doi.org/10.1609/aaai.v33i01.33014181

Section

AAAI Technical Track: Machine Learning