Abstract:
Despite their prevalence, our foundational understanding of deep neural networks (NNs) remains shallow. This talk aims to shed some light on questions of robustness, generalization and unsupervised learning for inverse problems.
We begin by examining NNs through the lens of sparse local Lipschitz functions. We will show how characterizing the local neighborhoods in which these functions are locally Lipschitz and sparse allows us to derive tighter bounds on their stability. In turn, this observation will lead to tighter adversarial robustness certificates as well as non-uniform, and often non-vacuous, generalization bounds.
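As a rough illustration of why such local Lipschitz control matters (in generic notation, not the specific bounds developed in the talk): if the logit margins $g_j(x) = f_y(x) - f_j(x)$, $j \neq y$, are $L$-Lipschitz on a ball of radius $r$ around an input $x$, then
\[
  g_j(x + \delta) \;\geq\; g_j(x) - L\,\|\delta\| \qquad \text{for all } \|\delta\| \leq r,
\]
so the predicted label cannot change as long as $g_j(x) - L\|\delta\|$ remains positive for every $j \neq y$; sharper, neighborhood-dependent estimates of $L$ thus translate directly into larger certified radii.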
The second part of the talk studies how NNs are deployed to solve inverse problems. We will present a framework for learned proximal networks, which realize exact proximal operators for a data-driven nonconvex regularizer, and we will see how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. These networks yield unsupervised, expressive proximal operators that can be plugged into solvers for general inverse problems with convergence guarantees, while offering a window into the priors learned from data.
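For context, recall the proximal operator of a regularizer $R$ with parameter $\lambda > 0$ (generic notation, not the talk's): learned proximal networks are constructed so that the network itself is exactly such an operator for a data-driven, possibly nonconvex $R$, which can then be plugged into standard proximal iterations for a linear inverse problem $y = Ax + \text{noise}$,
\[
  \operatorname{prox}_{\lambda R}(v) \;=\; \operatorname*{arg\,min}_{x}\ \tfrac{1}{2}\,\|x - v\|_2^2 + \lambda R(x),
  \qquad
  x^{k+1} \;=\; \operatorname{prox}_{\gamma R}\!\big(x^{k} - \gamma\, A^{\top}(A x^{k} - y)\big),
\]
the latter being the proximal gradient step for $\tfrac{1}{2}\|Ax - y\|_2^2 + R(x)$ with step size $\gamma$.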
Bio:
Jeremias Sulam received his bioengineering degree from Universidad Nacional de Entre Ríos, Argentina, in 2013, and his PhD in Computer Science from the Technion – Israel Institute of Technology in 2018. He joined the Biomedical Engineering Department at Johns Hopkins University in 2018 as an assistant professor, and he is also a core faculty member of the Mathematical Institute for Data Science (MINDS) and the Center for Imaging Science at JHU. He is the recipient of the Best Graduates Award of the Argentinean National Academy of Engineering and of the CAREER award of the National Science Foundation. His research interests focus on robust, interpretable, and trustworthy machine learning, biomedical imaging, and inverse problems.