NeurIPS 2020

Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms

Meta Review

The paper considers online convex optimization with time-varying stage costs and additional switching costs, where long-term predictions are incorporated to improve online performance. A new algorithm, Receding Horizon Inexact Gradient, is given, with regret guarantees under a general model of prediction errors. The reviewers generally liked the paper and appreciated the well-motivated problem, the clear contribution to the theory, the rigorous analysis of the proposed algorithm, and the general prediction error model (as opposed to the idealized models used in past work). On the other hand, the reviewers raised several issues: the lack of explanation of the regret bound and the analysis in the main body of the paper, insufficient comparison to past work, the assumption that all stage functions share the same form, and the required smoothness and strong convexity of the stage costs. Most of these concerns appear to have been clarified in the rebuttal.