Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods

Authors

  • Hao Yuan, Washington State University
  • Yongjun Chen, Washington State University
  • Xia Hu, Texas A&M University
  • Shuiwang Ji, Texas A&M University

DOI:

https://doi.org/10.1609/aaai.v33i01.33015717

Abstract

Interpreting deep neural networks is of great importance for understanding and verifying deep models for natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose an approach to investigate the meanings of hidden neurons in convolutional neural network (CNN) models. We first employ saliency maps and optimization techniques to approximate the information that hidden neurons detect from input sentences. We then develop regularization terms and explore words in the vocabulary to interpret this detected information. Experimental results demonstrate that our approach identifies meaningful and reasonable interpretations for hidden spatial locations. Additionally, we show that our approach can describe the decision procedure of deep NLP models.
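
For readers who want to experiment with the general idea, below is a minimal PyTorch sketch of the two ingredients the abstract names: gradient-based saliency for a single hidden neuron of a text CNN, followed by a vocabulary search to label what that neuron responds to. Everything here is an illustrative assumption, not the authors' implementation: the toy CNN and its sizes are ours, and the cosine-similarity lookup is a crude stand-in for the paper's regularized optimization procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy text CNN: embedding -> 1D convolution. Sizes, the model, and the
# neuron choice are illustrative assumptions, not the paper's setup.
vocab_size, embed_dim, num_filters, kernel_size = 1000, 50, 16, 3
embedding = nn.Embedding(vocab_size, embed_dim)
conv = nn.Conv1d(embed_dim, num_filters, kernel_size)

tokens = torch.randint(0, vocab_size, (1, 10))   # dummy 10-token sentence

# Step 1: saliency. Backpropagate one hidden neuron's activation to the
# input embeddings to see which words it responds to.
embeds = embedding(tokens)                       # (1, 10, embed_dim)
embeds.retain_grad()
feats = conv(embeds.transpose(1, 2))             # (1, num_filters, 8)
feats[0, 0, 2].backward()                        # neuron: filter 0, position 2

saliency = embeds.grad[0].norm(dim=1)            # per-word saliency, (10,)
top_pos = saliency.argmax().item()

# Step 2 (simplified): interpret by searching the vocabulary. Here we rank
# words by cosine similarity to the gradient at the most salient position;
# the paper instead solves a regularized optimization problem, which this
# stand-in does not reproduce.
direction = embeds.grad[0, top_pos]              # (embed_dim,)
sims = F.cosine_similarity(embedding.weight, direction.unsqueeze(0))
print(sims.topk(5).indices)                      # ids of 5 closest words
```

With a trained model and real embeddings, the printed token ids would name the vocabulary words most aligned with what the chosen neuron detects at that spatial location.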

Published

2019-07-17

How to Cite

Yuan, H., Chen, Y., Hu, X., & Ji, S. (2019). Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5717-5724. https://doi.org/10.1609/aaai.v33i01.33015717

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Machine Learning