Keywords: Adversarial attack, adversarial examples, adversarial robustness, trade-off
An adversarial example is a perturbed data point crafted to fool a target network. Even neural networks with high test accuracy can be highly vulnerable to adversarial attacks. While pursuing defenses against these attacks, researchers have consistently observed a trade-off between adversarial robustness and standard accuracy, and some conjecture that this trade-off may be inevitable. We question this conjectured inevitability: can we reconcile the two goals of adversarial robustness and standard accuracy? In answering this question, we uncover a subtle connection between the goal of adversarial robustness and that of statistical robustness. We leverage this discovery to propose a novel framework that reconciles the goals of robustness and accuracy. In particular, statistically robust approaches shed light on how to define the loss function in a data-adaptive way, which is critical to this reconciliation. Our framework demonstrates promising performance on simulated and real-world data, achieving high accuracy on both clean and adversarial examples.
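To make the notion of an adversarial example concrete, the following is a minimal sketch (not the paper's method) of a fast-gradient-sign-style perturbation against a toy logistic-regression classifier; the weights, input, and perturbation budget are all illustrative assumptions.

```python
import math

def sigmoid(z):
    # Logistic function mapping a score to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Toy "trained" linear model: predicts class 1 when w.x + b > 0
w = [2.0, -1.0]
b = 0.0

x = [1.0, 0.5]   # clean input, true label y = 1
y = 1.0

# Gradient of the logistic loss w.r.t. the input x:
# L = -[y log p + (1-y) log(1-p)]  =>  dL/dx = (p - y) * w
p_clean = sigmoid(dot(w, x) + b)
grad_x = [(p_clean - y) * wi for wi in w]

# FGSM-style step: perturb the input in the sign of the gradient,
# bounded by a small budget eps (an assumed value here)
eps = 0.6
x_adv = [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad_x)]

p_adv = sigmoid(dot(w, x_adv) + b)
# p_clean > 0.5 (correctly classified), while p_adv < 0.5:
# the small, structured perturbation flips the prediction.
```

The example illustrates the phenomenon the abstract describes: a model that is accurate on the clean input `x` is fooled by a nearby point `x_adv`, motivating defenses that remain accurate on both.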