Inefficiency of K-FAC for Large Batch Size Training

  • Linjian Ma University of California at Berkeley
  • Gabe Montague University of California at Berkeley
  • Jiayu Ye University of California at Berkeley
  • Zhewei Yao University of California at Berkeley
  • Amir Gholami University of California at Berkeley
  • Kurt Keutzer University of California at Berkeley
  • Michael Mahoney University of California at Berkeley

Abstract

There have been several recent works claiming record times for ImageNet training. These records are achieved by using large batch sizes during training to leverage parallel resources and produce faster wall-clock times per training epoch. However, these solutions often require massive hyper-parameter tuning, an important cost that is frequently ignored. In this work, we perform an extensive analysis of large batch size training for two popular methods: Stochastic Gradient Descent (SGD) and the Kronecker-Factored Approximate Curvature (K-FAC) method. We evaluate the performance of these methods in terms of both wall-clock time and aggregate computational cost, and study their hyper-parameter sensitivity by performing more than 512 experiments per batch size for each method. We perform experiments with multiple models on two datasets, CIFAR-10 and SVHN. The results show that beyond a critical batch size both K-FAC and SGD significantly deviate from ideal strong-scaling behavior, and that, contrary to common belief, K-FAC does not exhibit improved large-batch scalability compared to SGD.

Published
2020-04-03
Section
AAAI Technical Track: Machine Learning