Systems | Information | Learning | Optimization

SILO: Optimizing Optimization Methods, To and Beyond Minimax Optimality

Abstract:
This talk will take up the task of designing the provably best possible gradient method for smooth convex optimization. Methods with big-O optimal worst-case guarantees were (famously) discovered in the 1980s by Nesterov. Methods with exactly minimax optimal worst-case guarantees were developed in the last decade. As a first result, we will present a “subgame perfect” method that is not only optimal against a worst-case problem instance but also optimally leverages all of the gradient information revealed at each step. This corresponds to being dynamically minimax optimal or, in game-theoretic terms, to having a subgame perfect strategy for optimization. Besides attaining this high standard (beyond minimax optimality), our subgame perfect gradient method is also very fast. As a second result, we will address the problem of optimally selecting stepsizes for gradient descent, giving a construction and analysis of the (conjectured) minimax optimal stepsizes. These optimized stepsizes arise from a beautiful fractal/dynamic programming construction.
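To make the stepsize-selection question concrete, here is a minimal sketch of gradient descent run under a prescribed stepsize schedule. Everything in it is an illustrative assumption: the gradient_descent helper, the toy quadratic objective, and the repeating [1, 1, 3] long-step pattern are not the talk’s conjectured minimax optimal construction, only a small example of the kind of schedule being optimized.

```python
import numpy as np

def gradient_descent(grad, x0, stepsizes, L):
    """Gradient descent with a prescribed stepsize schedule.

    Stepsizes are in units of 1/L, i.e. x_{t+1} = x_t - (h_t / L) * grad(x_t).
    """
    x = np.asarray(x0, dtype=float)
    for h in stepsizes:
        x = x - (h / L) * grad(x)
    return x

# Toy smooth convex problem: f(x) = 0.5 * x^T A x with gradient A x,
# whose smoothness constant L is the largest eigenvalue of A.
A = np.diag(np.linspace(0.01, 1.0, 50))
L = 1.0
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x0 = np.ones(50)

n = 15
constant = [1.0] * n                     # classical schedule h_t = 1/L
patterned = [1.0, 1.0, 3.0] * (n // 3)   # hypothetical long-step pattern (illustration only)

print("f after constant steps :", f(gradient_descent(grad, x0, constant, L)))
print("f after patterned steps:", f(gradient_descent(grad, x0, patterned, L)))
```

On this toy quadratic the nonconstant pattern decreases f faster than the constant 1/L schedule, even though its long steps would diverge if used alone; gains of this flavor, established with far sharper and provable guarantees, are what the talk’s optimized stepsize schedules deliver.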
Bio:
Ben Grimmer is an assistant professor of applied mathematics and statistics at Johns Hopkins University, supported by AFOSR, NSF, and a Sloan Fellowship. Prior to joining Hopkins, Ben did his PhD at Cornell, advised by James Renegar and Damek Davis, and spent a couple of semesters with Google Research and Simons. Ben’s work primarily focuses on the design and analysis of first-order methods. Some of his recent computer-assisted work has received substantial interest and has been featured in popular mathematics venues such as Quanta Magazine.

April 30, 2025
12:30 pm (1h)

Orchard View Room

Ben Grimmer, Johns Hopkins University
