The paper provides a follow-the-perturbed-leader (FTPL) algorithm and analysis that obtain better regret bounds when the loss/gradient sequence is predictable. The proofs rely on an equivalent regularization view of FTPL. The authors also apply this result to obtain a parallelizable algorithm for solving smooth convex-concave saddle-point games.

Most of the reviewers found the result interesting. Please address the reviewers' concerns. Personally, I find the predictable-sequences result interesting. However, I find the application to minimax saddle-point solving less compelling, given the multiple calls to the linear optimization oracle. It is not clear how this competes with other FTRL methods that also obtain 1/T rates but without multiple steps per iteration.