Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks

Authors

  • Jaedeok Kim, Samsung Electronics
  • Chiyoun Park, Samsung Electronics
  • Hyun-Joo Jung, Samsung Electronics
  • Yoonsuck Choe, Samsung Electronics

DOI:

https://doi.org/10.1609/aaai.v34i04.5872

Abstract

Architecture optimization, a technique for finding an efficient neural network that meets certain requirements, generally reduces to a set of multiple-choice selection problems among alternative sub-structures or parameters. The discrete nature of the selection problem, however, makes this optimization difficult. To tackle this problem, we introduce a novel concept of a trainable gate function. The trainable gate function, which confers a differentiable property on discrete-valued variables, allows us to directly optimize loss functions that include non-differentiable discrete values such as 0-1 selection. The proposed trainable gate can be applied to pruning: pruning is carried out simply by appending the proposed trainable gate functions to each intermediate output tensor and then fine-tuning the overall model with any gradient-based training method. The proposed method can therefore jointly optimize the selection of the pruned channels and the weights of the pruned model. Our experimental results demonstrate that the proposed method efficiently optimizes arbitrary neural networks in various tasks such as image classification, style transfer, optical flow estimation, and neural machine translation.
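To illustrate the plug-in gating idea described above, the following is a minimal sketch in PyTorch. The module name `TrainableChannelGate`, the per-channel `scores` parameter, and the sigmoid-based straight-through surrogate gradient are illustrative assumptions for this sketch, not the paper's exact gate-shaping function, which designs its own differentiable gate.

```python
# A minimal sketch of a plug-in channel gate (assumptions noted above):
# the forward pass emits a hard 0/1 value per channel, while gradients
# flow through a sigmoid surrogate (a straight-through-style estimator).
import torch
import torch.nn as nn


class TrainableChannelGate(nn.Module):
    """Appends a trainable 0-1 gate to each channel of an intermediate tensor."""

    def __init__(self, num_channels: int):
        super().__init__()
        # One trainable score per channel; a positive score keeps the channel.
        # Initialized positive so every channel starts open.
        self.scores = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hard = (self.scores > 0).float()   # discrete 0-1 channel selection
        soft = torch.sigmoid(self.scores)  # differentiable surrogate
        # Value equals `hard` in the forward pass; gradients follow `soft`.
        gate = hard + soft - soft.detach()
        # Broadcast over (N, C, H, W) feature maps.
        return x * gate.view(1, -1, 1, 1)
```

Inserting such a gate after a layer, e.g. `y = gate(conv(x))`, lets ordinary gradient descent update the layer weights and the channel-selection scores in the same fine-tuning loop; channels whose gates settle at 0 can then be physically removed from the model.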

Published

2020-04-03

How to Cite

Kim, J., Park, C., Jung, H.-J., & Choe, Y. (2020). Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4452-4459. https://doi.org/10.1609/aaai.v34i04.5872

Section

AAAI Technical Track: Machine Learning