Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory

Authors

  • Arghya Roy Chaudhuri, Indian Institute of Technology Bombay
  • Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay

DOI:

https://doi.org/10.1609/aaai.v34i06.6566

Abstract

Regret minimisation in stochastic multi-armed bandits is a well-studied problem, for which several optimal algorithms have been proposed. Such algorithms depend on (sufficient statistics of) the empirical reward distributions of the arms to decide which arm to pull next. In this paper, we consider the design of algorithms that are constrained to store statistics from only a bounded number of arms. For bandits with a finite set of arms, we derive a sub-linear upper bound on the regret that decreases with the “arm memory” size M. For instances with a large, possibly infinite, set of arms, we show a sub-linear bound on the quantile regret.

Our problem formulation generalises that of Liau et al. (2018), who fix M = O(1) and hence do not obtain bounds that depend on M. More importantly, our algorithms keep exploration and exploitation tightly coupled, without a dedicated exploration phase as employed by Liau et al. (2018). Although this choice makes our analysis harder, it leads to much-improved practical performance. For bandits with a large number of arms and no known structure on the rewards, our algorithms serve as a viable option. Unlike many other approaches to restricting the memory of bandit algorithms, ours do not need any additional technical assumptions.
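To make the "arm memory" constraint concrete, the sketch below is a toy, UCB-style bandit that stores statistics for at most M arms at a time, cycling unseen arms in as challengers when a well-sampled arm in memory looks empirically worst. This is only an illustration of the memory constraint under assumed mechanics; it is not the authors' algorithm, and the class name BoundedMemoryUCB and the parameter challenger_budget are invented for this example.

```python
import math
import random

class BoundedMemoryUCB:
    """Toy UCB variant that stores statistics for at most M arms.

    Hypothetical illustration of the bounded arm-memory constraint
    only; not the algorithm analysed in the paper. Arms outside the
    buffer carry no statistics.
    """

    def __init__(self, n_arms, memory_size, challenger_budget=100):
        self.n_arms = n_arms
        self.M = memory_size                      # bound on stored arm statistics
        self.budget = challenger_budget           # pulls before an eviction check
        self.buffer = list(range(min(memory_size, n_arms)))
        self.next_arm = len(self.buffer)          # next arm not yet admitted
        self.counts = {a: 0 for a in self.buffer}
        self.sums = {a: 0.0 for a in self.buffer}
        self.t = 0

    def _ucb(self, arm):
        if self.counts[arm] == 0:
            return float("inf")                   # force one pull of each new arm
        mean = self.sums[arm] / self.counts[arm]
        return mean + math.sqrt(2.0 * math.log(self.t) / self.counts[arm])

    def select(self):
        self.t += 1
        return max(self.buffer, key=self._ucb)    # explore/exploit within memory

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward
        if self.next_arm >= self.n_arms or self.counts[arm] < self.budget:
            return
        # The pulled arm is now well-sampled; if it is also the empirically
        # worst arm in memory, evict it and admit a fresh challenger, so
        # statistics for at most M arms are ever stored.
        worst = min(self.buffer,
                    key=lambda a: self.sums[a] / max(self.counts[a], 1))
        if worst == arm:
            self.buffer.remove(worst)
            del self.counts[worst], self.sums[worst]
            self.buffer.append(self.next_arm)
            self.counts[self.next_arm] = 0
            self.sums[self.next_arm] = 0.0
            self.next_arm += 1

# Example run: 200 Bernoulli arms, statistics kept for only 5 at a time.
means = [random.random() for _ in range(200)]
agent = BoundedMemoryUCB(n_arms=200, memory_size=5)
for _ in range(20000):
    arm = agent.select()
    agent.update(arm, 1.0 if random.random() < means[arm] else 0.0)
```

Note that this sketch interleaves exploration and exploitation inside the memory buffer rather than running a dedicated exploration phase, which loosely mirrors the design choice the abstract contrasts with Liau et al. (2018); the paper's actual algorithms and regret guarantees differ.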

Published

2020-04-03

How to Cite

Roy Chaudhuri, A., & Kalyanakrishnan, S. (2020). Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10085-10092. https://doi.org/10.1609/aaai.v34i06.6566

Section

AAAI Technical Track: Reasoning under Uncertainty