Reinforcing Neural Network Stability with Attractor Dynamics

Authors

  • Hanming Deng, Shanghai Jiao Tong University
  • Yang Hua, Queen's University Belfast
  • Tao Song, Shanghai Jiao Tong University
  • Zhengui Xue, Shanghai Jiao Tong University
  • Ruhui Ma, Shanghai Jiao Tong University
  • Neil Robertson, Queen's University Belfast
  • Haibing Guan, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v34i04.5787

Abstract

Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various perspectives. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that can be readily deployed on state-of-the-art ResNet-like networks. RMAN is needed only during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through extensive experiments, we show that RMAN-modified attractor dynamics give ResNet and its variants a more structured representation space and, more importantly, improve the generalization ability of ResNet-like networks on supervised tasks owing to the reinforced stability.
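
As a rough illustration of the training setup the abstract describes, the sketch below attaches an auxiliary attractor-style module to a ResNet, minimizes its energy term jointly with the task loss, and drops the module at inference. This is a minimal sketch under stated assumptions: the module name `AttractorHead`, the nearest-attractor distance energy, and the `energy_weight` coefficient are illustrative placeholders, not the paper's actual relu-max energy function.

```python
# Hedged sketch (not the authors' code): a training-time auxiliary energy term
# added to the original task loss, with the extra module discarded at test time.
import torch
import torch.nn as nn
import torchvision.models as models

class AttractorHead(nn.Module):
    """Lightweight auxiliary head over network outputs (assumed, illustrative form)."""
    def __init__(self, feat_dim, num_attractors=64):
        super().__init__()
        # Learnable attractor points in the output space (illustrative).
        self.attractors = nn.Parameter(torch.randn(num_attractors, feat_dim))

    def energy(self, feats):
        # Distance of each sample to its nearest attractor; minimizing this
        # pulls representations toward stable fixed points (illustrative energy).
        d = torch.cdist(feats, self.attractors)   # shape: (batch, num_attractors)
        return d.min(dim=1).values.mean()

backbone = models.resnet18(num_classes=10)
head = AttractorHead(feat_dim=10)                 # applied to the logits here for brevity
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.1, momentum=0.9)
energy_weight = 0.1                               # assumed trade-off coefficient

def train_step(images, labels):
    outputs = backbone(images)
    # Joint objective: original task loss plus the attractor energy term.
    loss = criterion(outputs, labels) + energy_weight * head.energy(outputs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, only `backbone` is used; the attractor head is removed,
# so the deployed network keeps its original structure.
```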

Published

2020-04-03

How to Cite

Deng, H., Hua, Y., Song, T., Xue, Z., Ma, R., Robertson, N., & Guan, H. (2020). Reinforcing Neural Network Stability with Attractor Dynamics. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3765-3772. https://doi.org/10.1609/aaai.v34i04.5787

Section

AAAI Technical Track: Machine Learning