Systems | Information | Learning | Optimization
 

Characterizing the implicit bias of optimization in terms of optimization geometry

In this talk, we will explore the implicit bias of generic optimization methods and its connection to generalization in ill-posed optimization problems. We will specifically study optimizing underdetermined linear regression and separable linear classification problems using common optimization methods, including mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms. We ask whether the specific global minimum (among the many possible global minima) reached by a given optimization algorithm can be characterized in terms of the geometry of its updates, as determined by the potential or norm, independently of hyperparameter choices such as step size and momentum.
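The simplest instance of this question is well understood and easy to verify numerically: on an underdetermined least-squares problem, plain gradient descent initialized at zero converges to the minimum Euclidean-norm global minimum, because its iterates never leave the row space of the data matrix. (Mirror descent with a different potential would instead select the interpolating solution minimizing that potential, which is the geometry-dependent characterization the abstract refers to.) Below is a minimal sketch of this fact, not code from the talk; the dimensions, step size, and variable names are illustrative, and it assumes only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # fewer equations than unknowns
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Safe step size: below 2 / L, where L = ||X||_2^2 is the smoothness
# constant of f(w) = 0.5 * ||X w - y||^2.
lr = 1.0 / np.linalg.norm(X, ord=2) ** 2

# Plain gradient descent on f, initialized at zero.
w = np.zeros(d)
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

# Among the infinitely many interpolating solutions X w = y, gradient
# descent from zero picks out the minimum Euclidean-norm one, i.e. the
# pseudoinverse solution.
w_min_norm = np.linalg.pinv(X) @ y

print("training residual   :", np.linalg.norm(X @ w - y))       # ~0
print("distance to min-norm:", np.linalg.norm(w - w_min_norm))  # ~0
```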

https://vimeo.com/262838037

March 21 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Suriya Gunasekar