Pruning from Scratch

Authors

  • Yulong Wang, Tsinghua University
  • Xiaolu Zhang, Ant Financial
  • Lingxi Xie, Huawei Noah's Ark Lab
  • Jun Zhou, Ant Financial
  • Hang Su, Tsinghua University
  • Bo Zhang, Tsinghua University
  • Xiaolin Hu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i07.6910

Abstract

Network pruning is an important research field aimed at reducing the computational cost of neural networks. Conventional approaches follow a fixed paradigm: first train a large, redundant network, then determine which units (e.g., channels) are less important and can therefore be removed. In this work, we find that pre-training an over-parameterized model is not necessary for obtaining the target pruned structure; in fact, a fully trained over-parameterized model reduces the search space for the pruned structure. We empirically show that more diverse pruned structures can be obtained directly from randomly initialized weights, including potential models with better performance. We therefore propose a novel network pruning pipeline that allows pruning from scratch with little training overhead. In experiments compressing classification models on the CIFAR10 and ImageNet datasets, our approach not only greatly reduces the pre-training burden of traditional pruning methods, but also achieves similar or even higher accuracy under the same computation budgets. Our results encourage the community to rethink the effectiveness of existing techniques used for network pruning.
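
To make the core idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of deriving a pruned channel structure directly from randomly initialized weights, with no pre-training. It scores the output channels of an untrained convolution by filter magnitude and keeps only a fraction of them; the function names and the keep_ratio parameter are illustrative assumptions, and this is not the authors' actual pipeline, which selects the structure under a global computation budget as described in the abstract.

    # Minimal, hypothetical sketch: choose a pruned channel structure from
    # RANDOMLY INITIALIZED weights, without any pre-training. This illustrates
    # the general idea only; it is not the paper's actual algorithm.
    import torch
    import torch.nn as nn


    def select_channels_from_scratch(conv: nn.Conv2d, keep_ratio: float = 0.5):
        """Score each output channel of an untrained conv layer by the L1 norm
        of its filter and keep the top `keep_ratio` fraction."""
        with torch.no_grad():
            # conv.weight has shape (out_channels, in_channels, kH, kW)
            scores = conv.weight.abs().sum(dim=(1, 2, 3))
            n_keep = max(1, int(keep_ratio * conv.out_channels))
            keep_idx = torch.topk(scores, n_keep).indices.sort().values
        return keep_idx


    def build_pruned_conv(conv: nn.Conv2d, keep_idx: torch.Tensor) -> nn.Conv2d:
        """Create a slimmer conv layer with only the selected number of output
        channels; only the structure is kept, and the new layer is trained
        from scratch afterwards."""
        pruned = nn.Conv2d(conv.in_channels, len(keep_idx),
                           kernel_size=conv.kernel_size, stride=conv.stride,
                           padding=conv.padding, bias=conv.bias is not None)
        return pruned


    if __name__ == "__main__":
        conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # random init, untrained
        idx = select_channels_from_scratch(conv, keep_ratio=0.25)
        slim = build_pruned_conv(conv, idx)
        print(f"kept {slim.out_channels}/{conv.out_channels} channels")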

Published

2020-04-03

How to Cite

Wang, Y., Zhang, X., Xie, L., Zhou, J., Su, H., Zhang, B., & Hu, X. (2020). Pruning from Scratch. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12273-12280. https://doi.org/10.1609/aaai.v34i07.6910

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision