This paper presents an empirical framework called calibratable adversarial training. The proposal allows users to calibrate the desired robustness level at test time, depending on the use case, without re-training. The philosophy behind it, namely "Once-for-All" training, is quite interesting to me, and it leads to a novel algorithm design I have not seen before, i.e., CAT and CATS. The clarity and novelty are clearly above the bar for NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, we have all agreed to accept this paper for publication! Please carefully address R5's comments in the final version; in particular, the unified formulation should be made strictly precise.