An Effective Hard Thresholding Method Based on Stochastic Variance Reduction for Nonconvex Sparse Learning

  • Guannan Liang UCONN
  • Qianqian Tong UCONN
  • Chunjiang Zhu UCONN
  • Jinbo Bi UCONN

Abstract

We propose a hard thresholding method based on stochastically controlled stochastic gradients (SCSG-HT) to solve a family of sparsity-constrained empirical risk minimization problems. Rather than full gradients, SCSG-HT uses batch gradients, whose batch size is predetermined by the desired precision tolerance, to reduce the variance of the stochastic gradients. It also draws the number of inner-loop iterations per epoch from a geometric distribution. We prove that, like recent methods based on stochastic gradient descent or stochastic variance reduction, SCSG-HT enjoys a linear convergence rate; moreover, it comes with a strong guarantee of recovering the optimal sparse estimator. The computational complexity of SCSG-HT is independent of the sample size n when n is larger than 1/ε, which enhances its scalability to massive-scale problems. Empirical results demonstrate that SCSG-HT outperforms several competitors and achieves the largest decrease in objective value for the same computational cost.
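The abstract's three ingredients (a batch reference gradient in place of a full gradient, a geometrically distributed inner-loop length, and a hard-thresholding step) can be illustrated with the following minimal sketch for sparse least squares. This is not the paper's implementation: the step size, batch size, epoch count, and geometric-distribution parameter below are illustrative assumptions, and `scsg_ht` and `hard_threshold` are hypothetical names.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def scsg_ht(A, y, k, eta=0.01, batch=32, epochs=100, seed=0):
    # Sketch of an SCSG-style hard-thresholding loop for
    #   min_x (1/2n) ||Ax - y||^2  subject to  ||x||_0 <= k
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and batch (not full) reference gradient.
        x_ref = x.copy()
        B = rng.choice(n, size=batch, replace=False)
        g_ref = A[B].T @ (A[B] @ x_ref - y[B]) / batch
        # Inner-loop length drawn from a geometric distribution
        # (mean = batch size here; an illustrative choice).
        m = rng.geometric(1.0 / batch)
        for _ in range(m):
            i = rng.integers(n)
            grad_i = A[i] * (A[i] @ x - y[i])
            grad_i_ref = A[i] * (A[i] @ x_ref - y[i])
            # Variance-reduced stochastic gradient.
            v = grad_i - grad_i_ref + g_ref
            # Gradient step followed by hard thresholding.
            x = hard_threshold(x - eta * v, k)
    return x
```

On a noiseless synthetic problem with a k-sparse ground truth, the iterate stays k-sparse by construction and the estimation error shrinks across epochs, mirroring the linear-convergence behavior the abstract claims.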

Published
2020-04-03
Section
AAAI Technical Track: Constraint Satisfaction and Optimization