Systems | Information | Learning | Optimization
The Mysteries of Adversarial Robustness for Non-parametric Methods and Neural Networks

Adversarial examples are small, imperceptible perturbations to legitimate test inputs that cause machine learning classifiers to misclassify. While recent work has proposed many attacks and defenses, exactly why adversarial examples arise remains a mystery. In this talk, we’ll take a closer look at this question. We will look at non-parametric methods, …
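To make the idea of a small perturbation flipping a prediction concrete, here is a minimal sketch (not from the talk) using a toy linear classifier: the input is nudged by a small amount in the direction that most hurts the classifier, in the spirit of gradient-sign attacks. The weights, input, and budget below are illustrative assumptions.

```python
import numpy as np

# Toy linear binary classifier: predict sign(w . x).
w = np.array([1.0, -2.0, 0.5])      # illustrative weights
x = np.array([0.3, 0.1, 0.2])       # legitimate input

score = float(w @ x)                # 0.2 -> classified as +1
eps = 0.25                          # small L_inf perturbation budget

# For a linear score, the gradient w.r.t. x is just w, so moving x by
# -eps * sign(w) maximally decreases the score within the budget.
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv)        # now negative -> classified as -1
```

Each coordinate of the input moves by at most 0.25, yet the predicted class flips; for deep networks and non-parametric methods the same phenomenon appears with perturbations far too small for a human to notice.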