Adversarial Robustness From Well-Separated Data
Classifiers are known to be vulnerable to adversarial examples: imperceptible modifications of valid inputs that lead to misclassification. This vulnerability raises security concerns, and recent research aims to better understand the phenomenon. We make progress on two fronts: 1) We take a holistic look at adversarial examples for non-parametric …