Systems | Information | Learning | Optimization
 

Nonconvex optimization despite expensive, inexact, or unavailable values

In this talk, we review several challenges that one can encounter while solving a smooth nonconvex optimization problem. For each of these issues, we develop a practical variant of the popular trust-region method that retains second-order guarantees.
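For readers unfamiliar with the framework, here is a minimal Python sketch of a basic trust-region iteration. It uses the textbook Cauchy (steepest-descent) step and a first-order model, not the specific variants developed in the talk; the function names, parameter defaults, and the quadratic test function are illustrative assumptions.

```python
import numpy as np

def trust_region_cauchy(f, grad, x, delta=1.0, delta_max=10.0,
                        eta=0.1, tol=1e-6, max_iter=200):
    """Basic trust-region loop using the Cauchy point as the trial step."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy step: steepest descent, truncated at the region boundary.
        step = -(delta / gnorm) * g
        # Predicted decrease of the linear model m(p) = f(x) + g^T p.
        predicted = delta * gnorm
        actual = f(x) - f(x + step)
        rho = actual / predicted       # model/function agreement ratio
        if rho >= eta:                 # successful step: accept, maybe expand
            x = x + step
            delta = min(2.0 * delta, delta_max)
        else:                          # unsuccessful step: shrink the region
            delta *= 0.5
    return x

# Hypothetical usage on a simple quadratic:
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x_min = trust_region_cauchy(f, grad, np.array([3.0, -4.0]))
```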
We first consider the case of a large number of problem variables. By resorting to inexact linear algebra techniques, we develop an approach with the best known second-order complexity guarantees, comparable to those of accelerated gradient methods. We then assume that the derivatives of the objective function cannot be accessed exactly, as occurs for instance when subsampling strategies are used. This naturally leads us to the field of derivative-free optimization, where derivatives are not available at all for algorithmic purposes. We describe how a method in this class can be equipped with a second-order analysis, even though it relies exclusively on objective values. Finally, we discuss recent advances and open questions from stochastic derivative-free optimization, where even the function values cannot be obtained exactly.
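To illustrate the derivative-free setting mentioned above, the sketch below shows a generic coordinate direct-search loop that uses only objective values. It is not the second-order method from the talk; all names, constants, and the sufficient-decrease test are assumptions chosen for illustration.

```python
import numpy as np

def direct_search(f, x, alpha=1.0, alpha_min=1e-8, theta=0.5, gamma=2.0):
    """Coordinate direct search: polls +/- unit directions, f-values only."""
    n = len(x)
    fx = f(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # poll set {+e_i, -e_i}
    while alpha > alpha_min:
        improved = False
        for d in directions:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - 1e-4 * alpha**2:   # sufficient-decrease test
                x, fx = trial, ft
                alpha *= gamma              # successful poll: expand step
                improved = True
                break
        if not improved:
            alpha *= theta                  # unsuccessful poll: contract step
    return x

# Hypothetical usage, no derivatives required:
f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
x_min = direct_search(f, np.array([5.0, 5.0]))
```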
August 22, 2019
4:00 pm (1h)

Memorial Union, State Room

Clément Royer