Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Allen Liu, Renato Paes Leme, Jon Schneider
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function f : R^d → R that well approximates a set of points (x_i, v_i) ∈ R^d × [0,1] in the following sense: we incur a loss of v_i when f(x_i) > v_i and a loss of v_i − f(x_i) when f(x_i) ≤ v_i. This loss arises naturally in the economic application of designing a pricing policy for differentiated items, where it measures the gap between the revenue of our policy and that of the optimal Myerson prices. We show that Myersonian regression is NP-hard to solve exactly and, furthermore, that no fully polynomial-time approximation scheme exists for it, assuming the Exponential Time Hypothesis. In contrast, we give a polynomial-time approximation scheme for Myersonian regression that obtains an additive ϵm approximation to the optimal possible revenue (where m is the number of samples) and runs in time O(exp(poly(1/ϵ)) · poly(m, n)). We show that this algorithm is stable and generalizes well over distributions of samples.
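The discontinuous loss defined above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the function name `myersonian_loss` and the toy data are our own. It evaluates the empirical loss of a linear pricing rule f(x) = ⟨w, x⟩ on a set of samples: a posted price above the buyer's value loses the sale (loss v_i), while a price at or below the value forgoes revenue v_i − f(x_i).

```python
import numpy as np

def myersonian_loss(w, X, v):
    """Empirical Myersonian loss of the linear pricing rule f(x) = <w, x>.

    For each sample (x_i, v_i): if the posted price f(x_i) exceeds the
    buyer's value v_i, the sale is lost and the loss is v_i; otherwise
    the item sells at price f(x_i) and the loss is the forgone revenue
    v_i - f(x_i).
    """
    prices = X @ w
    return np.where(prices > v, v, v - prices).sum()

# Toy example (hypothetical data): one feature, two buyers with
# values 0.5 and 0.8.
X = np.array([[1.0], [1.0]])
v = np.array([0.5, 0.8])

# A uniform price of 0.5 sells to both buyers: loss = 0 + (0.8 - 0.5) = 0.3.
low_price_loss = myersonian_loss(np.array([0.5]), X, v)

# A uniform price of 0.6 overshoots the first buyer, losing that sale
# entirely: loss = 0.5 + (0.8 - 0.6) = 0.7.
high_price_loss = myersonian_loss(np.array([0.6]), X, v)
```

The jump from loss 0.3 to 0.7 as the price crosses 0.5 illustrates the discontinuity that makes this objective hard to optimize with standard regression techniques.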