Gradient-Based Optimization for Bayesian Preference Elicitation

Authors

  • Ivan Vendrov, Google Research
  • Tyler Lu, Google Research
  • Qingqing Huang, Google Research
  • Craig Boutilier, Google Research

DOI:

https://doi.org/10.1609/aaai.v34i06.6592

Abstract

Effective techniques for eliciting user preferences have taken on added importance as recommender systems (RSs) become increasingly interactive and conversational. A common and conceptually appealing Bayesian criterion for selecting queries is expected value of information (EVOI). Unfortunately, it is computationally prohibitive to construct queries with maximum EVOI in RSs with large item spaces. We tackle this issue by introducing a continuous formulation of EVOI as a differentiable network that can be optimized using gradient methods available in modern machine learning computational frameworks (e.g., TensorFlow, PyTorch). We exploit this to develop a novel Monte Carlo method for EVOI optimization, which is much more scalable for large item spaces than methods requiring explicit enumeration of items. While we emphasize the use of this approach for pairwise (or k-wise) comparisons of items, we also demonstrate how our method can be adapted to queries involving subsets of item attributes or “partial items,” which are often more cognitively manageable for users. Experiments show that our gradient-based EVOI technique achieves state-of-the-art performance across several domains while scaling to large item spaces.
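To make the abstract's core idea concrete, the sketch below illustrates one way a Monte Carlo EVOI estimate for a pairwise comparison query can be written as a differentiable computation and maximized by gradient ascent, in the spirit of the continuous formulation described above. This is a minimal illustration, not the authors' exact formulation: it assumes linear utilities, Gaussian posterior samples, a logistic (Bradley-Terry) response model, and a small explicit candidate set for scoring post-response recommendations; all variable names are illustrative.

```python
# Sketch: Monte Carlo EVOI for a pairwise query, optimized with gradients.
# Assumptions (illustrative, not the paper's exact method): linear utility
# model, Gaussian posterior samples, logistic response model, explicit
# candidate item set for the post-response recommendation value.
import torch

torch.manual_seed(0)
d, M, N = 8, 512, 100            # attribute dim, posterior samples, candidate items

W = torch.randn(M, d)            # Monte Carlo samples from the utility posterior
X = torch.rand(N, d)             # candidate item attribute vectors

# Continuous query: two relaxed item vectors optimized by gradient ascent.
q = torch.nn.Parameter(torch.rand(2, d))
opt = torch.optim.Adam([q], lr=0.05)

def evoi(q, W, X):
    """Differentiable Monte Carlo estimate of the posterior-value term of EVOI."""
    diff = W @ (q[0] - q[1])                    # (M,) utility gap per sample
    p1 = torch.sigmoid(diff)                    # P(user prefers q[0] | w_m)
    item_util = W @ X.T                         # (M, N) utility of each candidate
    value = 0.0
    for p_r in (p1, 1.0 - p1):                  # two possible responses
        pr = p_r.mean()                         # marginal response probability
        post = p_r / p_r.sum()                  # posterior weights over samples
        exp_util = post @ item_util             # (N,) posterior expected utilities
        value = value + pr * exp_util.max()     # value of best recommendation
    return value                                # prior-value term is constant in q

for step in range(200):
    opt.zero_grad()
    loss = -evoi(q, W, X)                       # maximize EVOI
    loss.backward()
    opt.step()

# Snap each optimized continuous query vector to its nearest real item.
query_items = ((q.detach().unsqueeze(1) - X.unsqueeze(0)) ** 2).sum(-1).argmin(dim=1)
print("selected query item indices:", query_items.tolist())
```

Because the objective never enumerates candidate queries, the per-step cost scales with the number of posterior samples and attributes rather than with the size of the item space, which is the scalability advantage the abstract highlights.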

Published

2020-04-03

How to Cite

Vendrov, I., Lu, T., Huang, Q., & Boutilier, C. (2020). Gradient-Based Optimization for Bayesian Preference Elicitation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10292-10301. https://doi.org/10.1609/aaai.v34i06.6592

Section

AAAI Technical Track: Reasoning under Uncertainty