#### Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

Part of Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)

BibTeX download is not available in the pre-proceedings

#### Authors

*Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, Stratis Skoulakis*

#### Abstract

<p>We consider a natural model of online preference aggregation, where sets of preferred items R_1, R_2, ..., R_t, ..., along with a demand for k_t items in each R_t, appear online. Without prior knowledge of (R_t, k_t), the learner maintains a ranking \pi_t, aiming that at least k_t items from R_t appear high in \pi_t. This is a fundamental problem in preference aggregation, with applications to, e.g., ordering product or news items in web pages based on user scrolling and click patterns.</p>
<p>The widely studied Generalized Min-Sum-Set-Cover (GMSSC) problem serves as a formal model for the setting above. GMSSC is NP-hard, and the standard application of no-regret online learning algorithms is computationally inefficient because they operate in the space of rankings. In this work, we show how to achieve low regret for GMSSC in polynomial time. We employ dimensionality reduction from rankings to the space of doubly stochastic matrices, where we apply Online Gradient Descent. A key step is to show how subgradients can be computed efficiently, by solving the dual of a configuration LP. Using deterministic and randomized rounding schemes, we map doubly stochastic matrices back to rankings with a small loss in the GMSSC objective.</p>
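To make the pipeline concrete, here is a minimal sketch of its generic skeleton: a gradient step in the space of doubly stochastic matrices, an (approximate) projection back onto that space via Sinkhorn–Knopp normalization, and a simple randomized rounding to a ranking. This is an illustration only, not the paper's algorithm: the GMSSC subgradient (computed via the dual of a configuration LP in the paper) is replaced by a random placeholder, and the rounding scheme shown is a naive one chosen for brevity.

```python
import numpy as np

def sinkhorn_project(X, iters=500):
    """Approximately project a matrix onto the Birkhoff polytope
    (doubly stochastic matrices) by alternating row/column
    normalization (Sinkhorn-Knopp). Negative entries are clipped
    first so the iteration is well defined."""
    X = np.clip(X, 1e-12, None)
    for _ in range(iters):
        X = X / X.sum(axis=1, keepdims=True)  # rows sum to 1
        X = X / X.sum(axis=0, keepdims=True)  # columns sum to 1
    return X

def ogd_step(X, grad, eta=0.05):
    """One Online Gradient Descent step, followed by projection
    back to (approximately) doubly stochastic matrices."""
    return sinkhorn_project(X - eta * grad)

def round_to_ranking(X, rng):
    """Naive randomized rounding: assign each item a still-free
    position, sampled proportionally to its row of X. Illustrative
    only -- not the rounding scheme analyzed in the paper."""
    n = X.shape[0]
    free = list(range(n))              # positions not yet assigned
    ranking = np.empty(n, dtype=int)
    for item in range(n):
        p = X[item, free]
        p = p / p.sum()
        idx = rng.choice(len(free), p=p)
        ranking[item] = free.pop(idx)  # ranking[item] = its position
    return ranking

rng = np.random.default_rng(0)
n = 4
X = np.full((n, n), 1.0 / n)           # uniform doubly stochastic start
grad = rng.standard_normal((n, n))     # placeholder for a GMSSC subgradient
X = ogd_step(X, grad)
pi = round_to_ranking(X, rng)          # a permutation of positions 0..n-1
```

The projection step is the computational crux in practice: exact Euclidean projection onto the Birkhoff polytope is itself an optimization problem, and Sinkhorn normalization is a common inexpensive approximation.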