NeurIPS 2020
### Online Convex Optimization Over Erdos-Renyi Random Networks

### Meta Review

The paper considers the problem of distributed online convex optimization over Erdős-Rényi random graphs and analyzes the online gradient descent algorithm in the full-information setting as well as under one-point and two-point bandit query models. Both convex Lipschitz and strongly convex losses are considered. In the full-information case, the regret bound is shown to match the rates of classical online gradient descent in terms of the horizon T. For two-point bandit queries, the rates are shown to match the optimal rates of the classical setting as well. For one-point queries, possibly sub-optimal rates of T^{3/4} and T^{2/3} are shown for the Lipschitz and strongly convex settings, respectively.
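For concreteness, the two feedback models can be sketched as follows. This is an illustrative, single-agent (non-distributed) sketch with placeholder step-size and smoothing parameters, not the paper's algorithm: a classical full-information online gradient descent step, and a Flaxman-style one-point bandit gradient estimate that uses a single loss query per round.

```python
import numpy as np

def ogd_full_info(grad, d, T, D=1.0, G=1.0):
    """Classical full-information online gradient descent (sketch).
    Step size eta_t = D/(G*sqrt(t)) gives the O(sqrt(T)) regret rate
    referred to above; D and G are illustrative placeholder constants."""
    x = np.zeros(d)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        eta = D / (G * np.sqrt(t))
        x = x - eta * grad(x)  # full-gradient step (projection omitted)
    return iterates

def one_point_gradient_estimate(f, x, delta, rng):
    """One-point bandit gradient estimate (sketch): a single loss query
    f(x + delta*u) along a random unit direction u, scaled by d/delta.
    It is unbiased only for a delta-smoothed loss, which is the source
    of the weaker one-point rates discussed above."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u
```

The two-point model queries the loss at two symmetric perturbations per round, which reduces the estimator's variance enough to recover the full-information rates.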
Overall, the reviewers found the work interesting. One main concern raised was the motivation of the setting, which the authors seem to have addressed somewhat satisfactorily.
For the one-point query setting, in the classical setting (say, for convex Lipschitz losses), one can obtain an O(sqrt{T}) regret bound using the Bernoulli kernel method of Bubeck et al. Is there any hope of distributing this approach? I don't see much discussion of this in the paper.