Learning Resource Allocation and Pricing for Cloud Profit Maximization

Authors

  • Bingqian Du The University of Hong Kong
  • Chuan Wu The University of Hong Kong
  • Zhiyi Huang The University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v33i01.33017570

Abstract

Cloud computing has been widely adopted to support various computation services. A fundamental problem faced by cloud providers is how to efficiently allocate resources upon user requests and price the resource usage, in order to maximize resource efficiency and hence provider profit. Existing studies establish detailed performance models of cloud resource usage and propose offline or online algorithms to decide allocation and pricing. In contrast, we adopt a black-box approach and leverage model-free Deep Reinforcement Learning (DRL) to capture the dynamics of cloud users and better characterize the inherent connections between an optimal allocation/pricing policy and the states of the dynamic cloud system. The goal is to learn, through trial and error, a policy that maximizes the net profit of the cloud provider and outperforms decisions made on explicit performance models. We combine long short-term memory (LSTM) units with fully-connected neural networks in our DRL framework to handle online user arrivals, and adjust the output and update methods of basic DRL algorithms to address both resource allocation and pricing. Evaluation based on real-world datasets shows that our DRL approach significantly outperforms basic DRL algorithms and state-of-the-art white-box online cloud resource allocation/pricing algorithms, in terms of both profit and the number of accepted users.
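The abstract describes a policy network that combines LSTM units (to summarize the online sequence of user arrivals) with fully-connected layers, and produces both an allocation decision and a price. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of such an architecture, where the input features, layer sizes, and the accept/reject-plus-price action parameterization are all assumptions made for illustration.

# Minimal sketch (assumed architecture, not the paper's code): an LSTM over
# the history of user requests feeds fully-connected layers, which output
# an allocation decision (accept/reject logits) and a non-negative price.
import torch
import torch.nn as nn


class AllocationPricingPolicy(nn.Module):
    def __init__(self, request_dim=8, hidden_dim=64):
        super().__init__()
        # LSTM summarizes the online sequence of user arrivals.
        self.lstm = nn.LSTM(input_size=request_dim, hidden_size=hidden_dim,
                            batch_first=True)
        # Fully-connected trunk on top of the final LSTM state.
        self.fc = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Two heads: allocation (reject/accept) and pricing.
        self.alloc_head = nn.Linear(hidden_dim, 2)
        self.price_head = nn.Linear(hidden_dim, 1)

    def forward(self, request_seq):
        # request_seq: (batch, seq_len, request_dim), most recent request last.
        _, (h_n, _) = self.lstm(request_seq)
        z = self.fc(h_n[-1])
        alloc_logits = self.alloc_head(z)
        price = torch.nn.functional.softplus(self.price_head(z))  # keep price >= 0
        return alloc_logits, price


if __name__ == "__main__":
    policy = AllocationPricingPolicy()
    dummy_requests = torch.randn(1, 5, 8)  # 5 past requests, 8 features each
    logits, price = policy(dummy_requests)
    print(logits.shape, price.shape)  # torch.Size([1, 2]) torch.Size([1, 1])

In a DRL training loop, the allocation logits would parameterize a stochastic accept/reject action and the price head a (possibly continuous) pricing action, with the provider's net profit serving as the reward signal; the specific update rule used in the paper is not reproduced here.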

Published

2019-07-17

How to Cite

Du, B., Wu, C., & Huang, Z. (2019). Learning Resource Allocation and Pricing for Cloud Profit Maximization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7570-7577. https://doi.org/10.1609/aaai.v33i01.33017570

Section

AAAI Technical Track: Planning, Routing, and Scheduling