Part of Advances in Neural Information Processing Systems 31 (NeurIPS 2018)
Zakaria Mhammedi, Robert C. Williamson
We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and for the right choice of loss function and ``mixing'' algorithm, the learner can achieve a constant regret regardless of the number of prediction rounds. For example, a constant regret can be achieved for \emph{mixable} losses using the \emph{aggregating algorithm}. The \emph{Generalized Aggregating Algorithm} (\textsc{GAA}) is a family of algorithms parameterized by convex functions on simplices (entropies), which reduces to the aggregating algorithm when the \emph{Shannon entropy} $\mathrm{S}$ is used. For a given entropy $\Phi$, losses for which a constant regret is possible using the \textsc{GAA} are called $\Phi$-mixable. Which losses are $\Phi$-mixable was previously left as an open question. We fully characterize $\Phi$-mixability and answer other open questions posed by \cite{Reid2015}. We show that the Shannon entropy $\mathrm{S}$ is fundamental to mixability: any $\Phi$-mixable loss is necessarily $\mathrm{S}$-mixable, and the lowest worst-case regret of the \textsc{GAA} is achieved using the Shannon entropy. Finally, by leveraging the connection between the \emph{mirror descent algorithm} and the update step of the \textsc{GAA}, we propose a new \emph{adaptive} generalized aggregating algorithm and analyze its regret bound.
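For readers unfamiliar with the terminology, the following is a rough sketch of the classical and generalized mixability conditions as commonly stated in this literature; the notation ($\ell$, $\Delta_k$, $D_\Phi$, $\mu$) is assumed for illustration and is not taken from this abstract.

```latex
% Sketch of the mixability conditions (notation assumed, not taken from the abstract).
% Classical mixability: a loss \ell : \mathcal{A} \times \mathcal{X} \to [0,\infty] is
% \eta-mixable (\eta > 0) if for every distribution \mu \in \Delta_k over k experts and
% every tuple of expert predictions (a_1, \dots, a_k) there exists a prediction a^*
% such that, for all outcomes x,
\[
  \ell(a^*, x) \;\le\; -\frac{1}{\eta} \log \sum_{\theta=1}^{k} \mu_\theta \, e^{-\eta\, \ell(a_\theta, x)} .
\]
% Generalized (\Phi-)mixability replaces the log-sum-exp mixture by an infimum of the
% expected loss plus the Bregman divergence D_\Phi induced by a convex entropy \Phi
% on the simplex:
\[
  \ell(a^*, x) \;\le\; \inf_{q \in \Delta_k} \Big( \sum_{\theta=1}^{k} q_\theta\, \ell(a_\theta, x) \;+\; D_\Phi(q, \mu) \Big).
\]
% Taking \Phi(q) = \tfrac{1}{\eta} \sum_\theta q_\theta \log q_\theta (a scaled, convex
% Shannon entropy) gives D_\Phi(q,\mu) = \tfrac{1}{\eta}\,\mathrm{KL}(q \,\|\, \mu), and
% the second condition reduces to the first by the Gibbs variational formula.
```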