Delay-Adaptive Distributed Stochastic Optimization

Authors

  • Zhaolin Ren, Harvard University
  • Zhengyuan Zhou, New York University
  • Linhai Qiu, Google Inc.
  • Ajay Deshpande, IBM Research
  • Jayant Kalagnanam, IBM Research

DOI:

https://doi.org/10.1609/aaai.v34i04.6001

Abstract

In large-scale optimization problems, distributed asynchronous stochastic gradient descent (DASGD) is a commonly used algorithm. In most applications, a large number of computing nodes compute gradient information asynchronously, so the gradient received at a given iteration is often stale. In the presence of such delays, which can be unbounded, the convergence of DASGD is uncertain. The contribution of this paper is twofold. First, we propose a delay-adaptive variant of DASGD in which each iteration's step-size is adjusted according to the size of the delay, and we prove asymptotic convergence of the algorithm on variationally coherent stochastic problems, a class of functions that properly includes convex, quasi-convex, and star-convex functions. Second, we extend the convergence results of standard DASGD, usually established for problems with bounded domains, to problems with unbounded domains. In this way, we extend the frontier of theoretical guarantees for distributed asynchronous optimization and provide new insights for practitioners working on large-scale optimization problems.
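To illustrate the core idea of a delay-adaptive step-size, the following is a minimal Python sketch of asynchronous SGD in which each applied gradient is stale by a random delay and the step-size shrinks with that delay. The inverse-delay scaling `base_lr / (1 + delay)`, the function `delay_adaptive_asgd`, and the toy quadratic objective are illustrative assumptions, not the paper's exact schedule or analysis setting.

```python
import numpy as np

def delay_adaptive_asgd(grad_fn, x0, base_lr=0.1, num_iters=1000, max_delay=20, seed=0):
    """Simulate asynchronous SGD with stale gradients and a delay-adaptive step-size.

    grad_fn(x, rng) returns a stochastic gradient at x. The scaling
    base_lr / (1 + delay) is an illustrative choice of delay-adaptive rule.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    history = [x.copy()]  # past iterates, used to model gradient staleness
    for _ in range(num_iters):
        delay = int(rng.integers(0, max_delay + 1))            # staleness of this update
        stale_x = history[max(0, len(history) - 1 - delay)]    # iterate the worker actually read
        g = grad_fn(stale_x, rng)                              # stale stochastic gradient
        lr = base_lr / (1.0 + delay)                           # shrink step-size with delay
        x = x - lr * g
        history.append(x.copy())
    return x

if __name__ == "__main__":
    # Noisy gradient of f(x) = 0.5 * ||x||^2 as a toy stochastic problem.
    noisy_quad_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    print("final iterate:", delay_adaptive_asgd(noisy_quad_grad, x0=np.ones(5)))
```

Under this kind of rule, heavily delayed (and hence less reliable) gradients contribute smaller steps, which is the intuition behind the paper's convergence guarantee under unbounded delays.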

Published

2020-04-03

How to Cite

Ren, Z., Zhou, Z., Qiu, L., Deshpande, A., & Kalagnanam, J. (2020). Delay-Adaptive Distributed Stochastic Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5503-5510. https://doi.org/10.1609/aaai.v34i04.6001

Section

AAAI Technical Track: Machine Learning