One of the key challenges in the real-world deployment of machine learning models is their brittleness: their performance significantly degrades when exposed to even small variations of their training environments.
How can we build ML models that are more robust?
In this talk, I will present a methodology for training models that are invariant to a broad family of worst-case input perturbations. I will then describe how such robust learning leads to models that learn fundamentally different data representations, and how this can be useful even outside the adversarial context. Finally, I will discuss model robustness beyond the worst case: ways in which our models fail to generalize and how we can guide further progress on this front.
Dimitris Tsipras is a PhD student in the MIT EECS Department, advised by Aleksander Mądry. His work revolves around the reliability and robustness of machine learning systems, as well as the science of modern machine learning. He is supported by a Facebook PhD Fellowship.