Sensitivity Analysis of Deep Neural Networks

Authors

  • Hai Shu, The University of Texas MD Anderson Cancer Center
  • Hongtu Zhu, University of North Carolina at Chapel Hill

DOI

https://doi.org/10.1609/aaai.v33i01.33014943

Abstract

Deep neural networks (DNNs) have achieved superior performance in various prediction tasks, but can be very vulnerable to adversarial examples or perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to various forms of perturbations in real applications. We introduce a novel perturbation manifold and its associated influence measure to quantify the effects of various perturbations on DNN classifiers. Such perturbations include external and internal perturbations of input samples and network parameters. The proposed measure is motivated by information geometry and provides desirable invariance properties. We demonstrate that our influence measure is useful for four model-building tasks: detecting potential ‘outliers’, analyzing the sensitivity of model architectures, comparing network sensitivity between training and test sets, and locating vulnerable areas. Experiments show reasonably good performance of the proposed measure for the popular DNN models ResNet50 and DenseNet121 on the CIFAR10 and MNIST datasets.
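To give a concrete sense of the construction: in the information-geometric local-influence framework the abstract alludes to, a perturbation ω entering the class probabilities P(y | x, ω) induces a perturbation manifold with a Fisher-type metric tensor G(ω) = E_y[∇_ω log P(y | x, ω) ∇_ω log P(y | x, ω)^T], and the first-order influence of a perturbation at ω₀ on an objective f takes the quadratic form I_f(ω₀) = ∇f(ω₀)^T G(ω₀)^+ ∇f(ω₀), which is invariant to reparameterizations of ω. The sketch below is illustrative only, not the authors' code: it assumes a toy softmax classifier, an additive input perturbation with ω₀ = 0, and a hypothetical helper `influence_score`; the exact definitions are in the paper.

```python
# Illustrative sketch (not the authors' code): first-order influence of an
# additive input perturbation omega, evaluated at omega0 = 0, on the
# cross-entropy loss of a toy softmax classifier P(y | x + omega).
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def influence_score(W, b, x, true_label):
    """I_f(omega0) = g^T G(omega0)^+ g, with G the Fisher-type metric of the
    perturbation manifold and g the gradient of f(omega) = -log P(y_true)."""
    k, d = W.shape
    p = softmax(W @ x + b)                                      # class probabilities
    # Score vectors s_y = d log P(y | x + omega) / d omega at omega = 0,
    # which equals W^T (e_y - p) for the softmax model.
    S = np.stack([W.T @ (np.eye(k)[y] - p) for y in range(k)])  # (k, d)
    G = (p[:, None] * S).T @ S        # metric: sum_y p_y s_y s_y^T, shape (d, d)
    g = -S[true_label]                # gradient of the loss w.r.t. omega at 0
    return float(g @ np.linalg.pinv(G) @ g)  # pseudo-inverse: G may be singular

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(3, 5)), rng.normal(size=3), rng.normal(size=5)
print(influence_score(W, b, x, true_label=1))  # larger => more sensitive sample
```

The Moore–Penrose pseudo-inverse G⁺ stands in for an inverse because the metric is generally singular, e.g. here G has rank at most k − 1 < d, and the same issue arises for over-parameterized networks.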

Published

2019-07-17

How to Cite

Shu, H., & Zhu, H. (2019). Sensitivity Analysis of Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4943-4950. https://doi.org/10.1609/aaai.v33i01.33014943

Section

AAAI Technical Track: Machine Learning