Feature Variance Regularization: A Simple Way to Improve the Generalizability of Neural Networks

Authors

  • Ranran Huang, Tsinghua University
  • Hanbo Sun, Tsinghua University
  • Ji Liu, Xilinx
  • Lu Tian, Xilinx
  • Li Wang, Xilinx
  • Yi Shan, Xilinx
  • Yu Wang, Tsinghua University

DOI

https://doi.org/10.1609/aaai.v34i04.5840

Abstract

To improve the generalization ability of neural networks, we propose a novel regularization method that regularizes the empirical risk with a penalty on the empirical variance of the features. Intuitively, our approach introduces confusion into feature extraction and prevents the model from learning features that may relate only to specific training samples. According to our theoretical analysis, our method encourages models to generate closer feature distributions for the training set and the unobservable true data, and to minimize the expected risk as well, which allows the model to adapt better to new samples. We provide a thorough empirical justification of our approach, which achieves a greater improvement than other regularization methods. The experimental results show the effectiveness of our method on multiple visual tasks, including classification (CIFAR-100, ImageNet, fine-grained datasets) and semantic segmentation (Cityscapes).
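
To make the idea concrete, below is a minimal PyTorch sketch of the kind of regularizer the abstract describes: the batch loss is the empirical risk plus a weighted penalty on the per-dimension empirical variance of the features. The backbone/classifier split, the choice of penalized layer, and the weight `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def feature_variance_penalty(features: torch.Tensor) -> torch.Tensor:
    # features: (batch_size, feature_dim) activations from some layer.
    # Penalize the mean per-dimension empirical variance over the batch.
    # Which layer to penalize and how to reduce over dimensions are
    # assumptions for illustration, not the paper's exact choices.
    return features.var(dim=0, unbiased=False).mean()

def regularized_loss(backbone, classifier, x, y, lam=0.1):
    # Total loss = empirical risk + lam * feature variance penalty.
    # `backbone`, `classifier`, and `lam` are hypothetical names/values.
    features = backbone(x)             # e.g., penultimate-layer features
    logits = classifier(features)
    risk = F.cross_entropy(logits, y)  # empirical risk on the batch
    return risk + lam * feature_variance_penalty(features)
```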

Published

2020-04-03

How to Cite

Huang, R., Sun, H., Liu, J., Tian, L., Wang, L., Shan, Y., & Wang, Y. (2020). Feature Variance Regularization: A Simple Way to Improve the Generalizability of Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4190-4197. https://doi.org/10.1609/aaai.v34i04.5840

Section

AAAI Technical Track: Machine Learning