Systems | Information | Learning | Optimization

SILO: Understanding Deep Learning through Optimization Bias

Abstract:

How can models with more parameters than training examples generalize well, and generalize even better as more parameters are added? In recent years, it has become increasingly clear that this generalization ability stems from the optimization bias, or implicit bias, of the training procedures. In this talk, I will survey our work from the past several years highlighting the role of this implicit bias and using it to understand deep learning.


Biography:

Nati (Nathan) Srebro is a professor at the Toyota Technological Institute at Chicago, with cross-appointments at the University of Chicago's Department of Computer Science and Committee on Computational and Applied Mathematics. He obtained his PhD from the Massachusetts Institute of Technology in 2004, and was previously a postdoctoral fellow at the University of Toronto, a visiting scientist at IBM, and an associate professor at the Technion.

Dr. Srebro's research encompasses methodological, statistical, and computational aspects of machine learning, as well as related problems in optimization. Some of Srebro's significant contributions include work on learning "wider" Markov networks, introducing the use of the nuclear norm for machine learning and matrix reconstruction, work on fast optimization techniques for machine learning, and work on the relationship between learning and optimization. His current interests include understanding deep learning through a detailed understanding of optimization, distributed and federated learning, algorithmic fairness, and practical adaptive data analysis.

October 25, 2023
12:30 pm (1h)

Orchard View Room

Nati Srebro, TTIC

Video