Adapting to Smoothness: A More Universal Algorithm for Online Convex Optimization

Authors

  • Guanghui Wang, Nanjing University
  • Shiyin Lu, Nanjing University
  • Yao Hu, Alibaba Group
  • Lijun Zhang, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v34i04.6081

Abstract

We aim to design universal algorithms for online convex optimization, which can handle multiple common types of loss functions simultaneously. The previous state-of-the-art universal method achieves minimax optimality for general convex, exponentially concave, and strongly convex loss functions. However, it remains an open problem whether smoothness can be exploited to further improve the theoretical guarantees. In this paper, we provide an affirmative answer by developing a novel algorithm, named UFO, which achieves O(√L*), O(d log L*), and O(log L*) regret bounds for the three types of loss functions, respectively, under the assumption of smoothness, where L* is the cumulative loss of the best comparator in hindsight and d is the dimensionality. Thus, our regret bounds are much tighter when the comparator has a small loss, and they retain minimax optimality in the worst case. Moreover, UFO is the first algorithm to achieve an O(log L*) regret bound for strongly convex and smooth functions, which is tighter than the existing small-loss bound by a factor of O(d).
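
For reference, the regret and the small-loss quantity L* in the abstract follow the standard definitions in online convex optimization. The statement below is a minimal sketch under assumed notation (x_t for the learner's decision at round t, f_t for the loss function revealed at round t, and X for the feasible set); this notation is ours, not taken from the paper:

    \mathrm{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
    \qquad
    L_T^{*} = \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).

Since L* = O(T) for bounded losses, small-loss bounds of the form O(√L*), O(d log L*), and O(log L*) recover the worst-case rates O(√T), O(d log T), and O(log T), and improve on them whenever the best comparator incurs little cumulative loss.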

Published

2020-04-03

How to Cite

Wang, G., Lu, S., Hu, Y., & Zhang, L. (2020). Adapting to Smoothness: A More Universal Algorithm for Online Convex Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6162-6169. https://doi.org/10.1609/aaai.v34i04.6081

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning