Dynamic Learning of Sequential Choice Bandit Problem under Marketing Fatigue

Authors

  • Junyu Cao, University of California, Berkeley
  • Wei Sun, IBM Research

DOI:

https://doi.org/10.1609/aaai.v33i01.33013264

Abstract

Motivated by the observation that overexposure to unwanted marketing activities leads to customer dissatisfaction, we consider a setting where a platform offers a sequence of messages to its users and is penalized when users abandon the platform due to marketing fatigue. We propose a novel sequential choice model to capture the multiple interactions between the platform and a user: upon receiving a message, the user chooses one of three actions: accept the message, skip it and receive the next message, or abandon the platform. Based on user feedback, the platform dynamically learns users’ abandonment distribution and their valuations of messages in order to determine the length of the sequence and the order of the messages, while maximizing the cumulative payoff over a horizon of length T. We refer to this online learning task as the sequential choice bandit problem. For the offline combinatorial optimization problem, we give a polynomial-time algorithm. For the online problem, we propose an algorithm that balances exploration and exploitation, and characterize its regret bound. Lastly, we demonstrate how to extend the model with user contexts to incorporate personalization.
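The user dynamics described in the abstract can be illustrated with a minimal simulation. The sketch below is not the paper's algorithm; it only mocks up the three-way accept / skip / abandon choice, using hypothetical per-message accept probabilities, an abandonment probability, and an abandonment penalty as stand-ins for the model's learned quantities:

```python
import random

def simulate_session(messages, accept_probs, abandon_prob, penalty, seed=None):
    """Simulate one user session under a toy sequential choice model.

    messages:     ordered list of (message_id, reward) pairs offered by the platform
    accept_probs: dict mapping message_id -> probability the user accepts it
    abandon_prob: probability the user abandons after rejecting a message
    penalty:      cost to the platform if the user abandons
    Returns the platform's payoff for the session.
    """
    rng = random.Random(seed)
    for msg_id, reward in messages:
        u = rng.random()
        p_accept = accept_probs[msg_id]
        if u < p_accept:
            return reward      # user accepts: platform collects the message's reward
        if u < p_accept + (1 - p_accept) * abandon_prob:
            return -penalty    # user abandons due to fatigue: platform pays the penalty
        # otherwise the user skips and is shown the next message in the sequence
    return 0.0                 # sequence exhausted with neither an accept nor an abandon
```

The platform's offline problem is then to choose the sequence (its length and ordering) maximizing the expected value of this payoff; the online problem is to learn the accept and abandonment probabilities from such sessions.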

Published

2019-07-17

How to Cite

Cao, J., & Sun, W. (2019). Dynamic Learning of Sequential Choice Bandit Problem under Marketing Fatigue. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3264-3271. https://doi.org/10.1609/aaai.v33i01.33013264

Section

AAAI Technical Track: Machine Learning