Reinforcing Neural Network Stability with Attractor Dynamics

  • Hanming Deng Shanghai Jiao Tong University
  • Yang Hua Queen's University Belfast
  • Tao Song Shanghai Jiao Tong University
  • Zhengui Xue Shanghai Jiao Tong University
  • Ruhui Ma Shanghai Jiao Tong University
  • Neil Robertson Queen's University Belfast
  • Haibing Guan Shanghai Jiao Tong University

Abstract

Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that can be readily deployed on state-of-the-art ResNet-like networks. RMAN is only needed during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through extensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants, and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks due to the reinforced stability.
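The abstract describes a training-only auxiliary objective: an energy term is minimized jointly with the original task loss, while the backbone architecture stays unchanged and the module is discarded at inference. The sketch below illustrates that joint-training pattern only; the `AttractorEnergy` head, its energy form, the feature hook placement, and the weighting `lam` are all illustrative assumptions, not the paper's actual RMAN definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class AttractorEnergy(nn.Module):
    """Hypothetical auxiliary head mapping intermediate features to a scalar
    "energy"; lower energy is meant to stand in for more stable
    (attractor-like) representations. Not the paper's RMAN formulation."""
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats):                 # feats: (batch, feat_dim)
        return self.net(feats).pow(2).mean()  # non-negative scalar energy

# Unmodified ResNet backbone; the auxiliary head lives outside it.
backbone = torchvision.models.resnet18(num_classes=10)
energy_head = AttractorEnergy(feat_dim=512)   # ResNet-18 penultimate width

# Capture penultimate features with a forward hook so the backbone's
# structure and forward pass stay untouched.
feat_cache = {}
backbone.avgpool.register_forward_hook(
    lambda module, inp, out: feat_cache.update(penultimate=torch.flatten(out, 1))
)

params = list(backbone.parameters()) + list(energy_head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9)
lam = 0.1  # assumed weight balancing the energy term against the task loss

def training_step(images, labels):
    logits = backbone(images)
    feats = feat_cache["penultimate"]
    # Joint objective: original task loss plus the auxiliary energy.
    # At inference time only `backbone` is kept, so test-time cost is unchanged.
    loss = F.cross_entropy(logits, labels) + lam * energy_head(feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```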

Published
2020-04-03
Section
AAAI Technical Track: Machine Learning