Reassessing adversarial training with fixed data augmentation
A recently discovered bug in PyTorch + NumPy data loading got me thinking: how much does this bug impact adversarial robustness?
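For context, the bug in question (a sketch of the mechanism, not the original report's code): PyTorch `DataLoader` workers are forked from the parent process and inherit its global NumPy RNG state, so NumPy-based random augmentations can repeat identically across workers unless each worker is reseeded. A minimal reproduction using only `multiprocessing` and NumPy:

```python
import multiprocessing as mp
import numpy as np


def _worker(q):
    # Each forked child inherits the parent's global NumPy RNG state,
    # so every worker draws the *same* "random" number.
    q.put(np.random.randint(0, 2**31))


def draw_in_workers(n_workers=4):
    """Fork n_workers processes and collect one NumPy random draw from each."""
    ctx = mp.get_context("fork")  # fork is PyTorch's default on Linux
    q = ctx.Queue()
    procs = [ctx.Process(target=_worker, args=(q,)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    draws = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return draws


if __name__ == "__main__":
    draws = draw_in_workers()
    print(draws)  # all four values are identical
```

The commonly cited fix is to reseed NumPy per worker, e.g. by passing a `worker_init_fn` to `DataLoader` that calls `np.random.seed()` with a worker-dependent seed.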