Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
The paper presents a strong new attack against robust Byzantine ML training algorithms. The attack appears effective across a wide range of settings and is therefore a useful contribution to the Byzantine ML literature. One strong suggestion from the rebuttal discussion is that the authors should clarify the setting in which they operate: some of the methods they attack may still be robust in their originally assumed setting, even though they fail against the proposed scheme. That is, the threat model needs to be specified in detail.