Network as Regularization for Training Deep Neural Networks: Framework, Model and Performance

Authors

  • Kai Tian, Fudan University
  • Yi Xu, Fudan University
  • Jihong Guan, Tongji University
  • Shuigeng Zhou, Fudan University

DOI:

https://doi.org/10.1609/aaai.v34i04.6063

Abstract

Despite their powerful representation ability, deep neural networks (DNNs) are prone to over-fitting because of over-parameterization. Existing works have explored various regularization techniques to tackle the over-fitting problem. Some of them employ soft targets rather than one-hot labels to guide network training (e.g., label smoothing in classification tasks); these are called target-based regularization approaches in this paper. To alleviate the over-fitting problem, here we propose a new and general regularization framework that introduces an auxiliary network to dynamically incorporate guided semantic disturbance into the labels. We call it Network as Regularization (NaR for short). During training, the disturbance is constructed as a convex combination of the predictions of the target network and the auxiliary network. The two networks are initialized separately, and the auxiliary network is trained independently of the target network while progressively providing instance-level and class-level semantic information to the latter. We conduct extensive experiments to validate the effectiveness of the proposed method. Experimental results show that NaR outperforms many state-of-the-art target-based regularization methods, and that other regularization approaches (e.g., mixup) can also benefit from being combined with NaR.
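The abstract describes the core mechanism: soft targets built from a convex combination of the two networks' predictions. A minimal PyTorch sketch of that idea is given below. It is not the authors' implementation; the mixing weights `alpha` and `beta`, the loss form, and the function name `nar_loss` are illustrative assumptions, and the paper's actual weighting schedule may differ.

```python
# Illustrative sketch of the NaR idea from the abstract (not the paper's code).
# `alpha` and `beta` are assumed hyperparameters for demonstration only.
import torch
import torch.nn.functional as F

def nar_loss(target_logits, aux_logits, labels, alpha=0.9, beta=0.5):
    """Cross-entropy against a softened target built from both networks.

    target_logits: predictions of the target network being regularized
    aux_logits:    predictions of the independently trained auxiliary network
    labels:        ground-truth class indices
    alpha:         weight kept on the one-hot label (assumption)
    beta:          convex-combination weight between the two networks'
                   predictions, forming the "guided semantic disturbance"
    """
    num_classes = target_logits.size(1)
    one_hot = F.one_hot(labels, num_classes).float()
    # Convex combination of the two networks' predictions; detached so the
    # disturbance acts as a fixed soft target for this update step.
    disturbance = (beta * F.softmax(target_logits, dim=1)
                   + (1 - beta) * F.softmax(aux_logits, dim=1)).detach()
    soft_target = alpha * one_hot + (1 - alpha) * disturbance
    log_probs = F.log_softmax(target_logits, dim=1)
    return -(soft_target * log_probs).sum(dim=1).mean()
```

In use, both networks would do forward passes on the same batch, the target network would be updated with this loss, and the auxiliary network would be trained with its own independent objective, consistent with the abstract's statement that the two are initialized and trained separately.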

Published

2020-04-03

How to Cite

Tian, K., Xu, Y., Guan, J., & Zhou, S. (2020). Network as Regularization for Training Deep Neural Networks: Framework, Model and Performance. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6013-6020. https://doi.org/10.1609/aaai.v34i04.6063

Section

AAAI Technical Track: Machine Learning