For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus the robustness of neural networks is an increasingly important topic. In this talk, we present NoisyMix, an inexpensive yet effective training scheme that judiciously combines data augmentations with stability training and noise injections to improve both model robustness and in-domain accuracy. We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. In particular, NoisyMix currently claims the top spot on the leaderboards of RobustBench, a standardised benchmark for adversarial robustness. Moreover, we provide theory to understand the implicit regularisation effects and robustness of NoisyMix.
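To make the recipe concrete, below is a minimal PyTorch-style sketch of how mixing, noise injection, and a stability term can be combined in one loss. It is an illustrative simplification, not the authors' implementation: the names `noisy_mixup` and `noisymix_loss`, the noise parameterisation, and the `stability_weight` hyperparameter are all assumptions for exposition, and `x_aug` stands in for an already-augmented copy of the batch.

```python
import torch
import torch.nn.functional as F

def noisy_mixup(x, y, alpha=1.0, add_noise_level=0.4, mult_noise_level=0.5):
    """Illustrative noisy mixup (hedged sketch): convex-combine a batch with
    a shuffled copy, then inject additive and multiplicative noise."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[idx]
    # Noise injection: additive Gaussian plus a multiplicative perturbation.
    add_noise = add_noise_level * torch.randn_like(x_mix)
    mult_noise = 1.0 + mult_noise_level * (2.0 * torch.rand_like(x_mix) - 1.0)
    return mult_noise * x_mix + add_noise, y, y[idx], lam

def noisymix_loss(model, x_clean, x_aug, y, stability_weight=1.0):
    """Mixed classification loss plus a stability term that penalises
    divergence between predictions on clean and perturbed inputs."""
    x_noisy, y_a, y_b, lam = noisy_mixup(x_aug, y)
    logits_noisy = model(x_noisy)
    # Mixup-style classification loss over the two mixed label targets.
    loss_cls = lam * F.cross_entropy(logits_noisy, y_a) \
        + (1.0 - lam) * F.cross_entropy(logits_noisy, y_b)
    # Stability training: keep noisy predictions close to clean ones.
    with torch.no_grad():
        p_clean = F.softmax(model(x_clean), dim=1)
    log_p_noisy = F.log_softmax(logits_noisy, dim=1)
    loss_stab = F.kl_div(log_p_noisy, p_clean, reduction='batchmean')
    return loss_cls + stability_weight * loss_stab
```

In a training loop, `noisymix_loss(model, x, augment(x), y)` would replace the plain cross-entropy loss; the stability term acts as a consistency regulariser, which is one way to read the implicit-regularisation perspective mentioned above.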
Download the slides for this talk (PDF, 1706.63 MB).