Learning Plackett-Luce Mixtures from Partial Preferences

Authors

  • Ao Liu, Rensselaer Polytechnic Institute
  • Zhibing Zhao, Rensselaer Polytechnic Institute
  • Chao Liao, Shanghai University of Finance and Economics
  • Pinyan Lu, Shanghai University of Finance and Economics
  • Lirong Xia, Rensselaer Polytechnic Institute

DOI:

https://doi.org/10.1609/aaai.v33i01.33014328

Abstract

We propose an EM-based framework for learning the Plackett-Luce model and its mixtures from partial orders. The core of our framework is the efficient sampling of linear extensions of partial orders under the Plackett-Luce model. We propose two Markov Chain Monte Carlo (MCMC) samplers: a Gibbs sampler and the generalized repeated insertion method tuned by MCMC (GRIM-MCMC), and prove the efficiency of GRIM-MCMC for a large class of preferences.
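For intuition, the Plackett-Luce model generates a ranking by repeatedly choosing the next item with probability proportional to its weight among the items not yet ranked. The sketch below illustrates this generative process; it is a minimal illustration of the base model, not the paper's samplers over linear extensions, and the function name and weights are hypothetical.

```python
import random

def sample_pl_ranking(weights, rng=random):
    """Sample a full ranking from a Plackett-Luce model.

    At each step, the next item is drawn with probability
    proportional to its weight among the remaining items.
    Returns a list of item indices, best-ranked first.
    """
    remaining = list(range(len(weights)))
    ranking = []
    while remaining:
        total = sum(weights[i] for i in remaining)
        r = rng.random() * total
        acc = 0.0
        for pos, item in enumerate(remaining):
            acc += weights[item]
            if r <= acc:
                ranking.append(remaining.pop(pos))
                break
    return ranking

# Example (hypothetical weights): item 2 is most likely to be ranked first.
ranking = sample_pl_ranking([1.0, 2.0, 4.0])
```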

Experiments on synthetic data show that the algorithm with the Gibbs sampler outperforms that with GRIM-MCMC. Experiments on real-world data show that the likelihood of the test dataset increases when (i) partial orders provide more information; or (ii) the number of components in the mixture of Plackett-Luce models increases.

Published

2019-07-17

How to Cite

Liu, A., Zhao, Z., Liao, C., Lu, P., & Xia, L. (2019). Learning Plackett-Luce Mixtures from Partial Preferences. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4328-4335. https://doi.org/10.1609/aaai.v33i01.33014328

Section

AAAI Technical Track: Machine Learning