Class Type: Fall 2020
Enabling Fast and Robust Federated Learning
Ramtin Pedarsani
University of California, Santa Barbara
Physical Layer Fingerprinting By Design: Authentication and Security
Brian Sadler
Army Research Laboratory
The Mysteries of Adversarial Robustness for Non-parametric Methods and Neural Networks
Adversarial examples are small, imperceptible perturbations to legitimate test inputs that cause machine learning classifiers to misclassify them. While recent work has proposed many attacks and defenses, exactly why adversarial examples arise remains a mystery. In this talk, we’ll take a closer look at this question. We will look at non-parametric methods, …
The World Isn’t Flat: Towards Non-Euclidean Machine Learning
Is our familiar Euclidean space, with its linear structure, always the right place for machine learning? Recent research argues otherwise: this structure is not always needed and can sometimes be harmful, as demonstrated by a wave of exciting work. Starting with the notion of hyperbolic representations for hierarchical data, a major push has …
Data Re-weighting for Data Efficient Reinforcement Learning
Josiah Hanna
University of Texas at Austin
Learning from Societal Data: Theory and Practice
Machine learning algorithms for policy and decision making are becoming ubiquitous. In many societal applications, the inferences we can draw are often severely limited not by the number of subjects in the data but rather by the scarcity of observations available for each subject. My research focuses on tackling these limitations both …
Reliable Open-World Learning Against Out-of-distribution Data
The real world is open and full of unknowns, presenting significant challenges for AI systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model sees a test-time input that differs from its training data, and thus should not be predicted by …