AAAI Publications, Thirtieth AAAI Conference on Artificial Intelligence

Delay-Tolerant Online Convex Optimization: Unified Analysis and Adaptive-Gradient Algorithms
Pooria Joulani, András György, Csaba Szepesvári

Last modified: 2016-02-21

Abstract


We present a unified, black-box-style method for developing and analyzing online convex optimization (OCO) algorithms for full-information online learning in delayed-feedback environments. Our new, simplified analysis enables us to substantially improve upon previous work and to solve a number of open problems from the literature. Specifically, we develop and analyze asynchronous AdaGrad-style algorithms from the Follow-the-Regularized-Leader (FTRL) and Mirror-Descent families that, unlike previous works, can handle projections and adapt both to the gradients and to the delays, without relying on strong convexity or smoothness of the objective function, or on data sparsity. Our unified framework builds on a natural reduction from delayed-feedback to standard (non-delayed) online learning. This reduction, together with recent unification results for OCO algorithms, allows us to analyze the regret of generic FTRL and Mirror-Descent algorithms in the delayed-feedback setting in a unified manner using standard proof techniques. In addition, the reduction is exact and can be used to obtain both upper and lower bounds on the regret in the delayed-feedback setting.
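To make the reduction concrete, here is a minimal, hedged sketch (not the paper's exact algorithms or notation): a base non-delayed learner, here plain projected online gradient descent, is wrapped so that each round's gradient is delivered to it only when the delayed feedback arrives, and predictions always use the base learner's latest iterate. The class and function names, the loss, and the delay schedule below are all illustrative assumptions.

```python
# Illustrative sketch of a delayed-feedback-to-standard-OCO reduction.
# All names (OGD, delayed_oco, grads_fn, delays) are hypothetical, not from the paper.

class OGD:
    """Base (non-delayed) learner: projected online gradient descent on [-1, 1]."""

    def __init__(self, eta=0.1):
        self.x = 0.0
        self.eta = eta

    def predict(self):
        return self.x

    def update(self, grad):
        # Gradient step followed by projection onto the interval [-1, 1].
        self.x = max(-1.0, min(1.0, self.x - self.eta * grad))

def delayed_oco(base, grads_fn, delays, T):
    """Run the reduction: the gradient observed at round t arrives at round t + delays[t].

    The base learner is updated with gradients in ARRIVAL order; it never
    sees the delays, which is what makes the reduction black-box.
    """
    pending = {}  # arrival round -> list of gradients delivered at that round
    preds = []
    for t in range(T):
        # Deliver all feedback scheduled to arrive at round t.
        for g in pending.pop(t, []):
            base.update(g)
        x_t = base.predict()
        preds.append(x_t)
        g_t = grads_fn(x_t, t)
        pending.setdefault(t + delays[t], []).append(g_t)
    return preds

if __name__ == "__main__":
    # Toy example: fixed quadratic loss f(x) = (x - 0.5)^2 with constant delay 3.
    preds = delayed_oco(
        base=OGD(eta=0.1),
        grads_fn=lambda x, t: 2.0 * (x - 0.5),
        delays=[3] * 200,
        T=200,
    )
    print(round(preds[-1], 3))
```

Despite each gradient arriving three rounds late, the base learner still drifts toward the minimizer at 0.5; the reduction's regret analysis in the paper quantifies exactly how much such delays cost relative to the non-delayed base algorithm.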

Keywords


Machine Learning; Online Learning; Delayed Feedback; AdaGrad; Online Convex Optimization; FTRL; Mirror Descent
