Systems | Information | Learning | Optimization
 

Learning under Uncertainty: Jumping out of the traditional stochastic optimization framework

Uncertainty permeates every corner of machine learning, from data and adversarial uncertainty to model uncertainty, dynamics uncertainty, and even task uncertainty. When faced with complex machine learning tasks under these various forms of uncertainty, the traditional empirical risk minimization framework, along with the rich collection of off-the-shelf stochastic optimization algorithms, may no longer apply. This calls for new frameworks, algorithms, and principles for handling uncertainty and making learning effective. In this talk, I will introduce two unified optimization frameworks that cover a wide spectrum of learning paradigms under uncertainty, including reinforcement learning, meta-learning, and adversarial and Bayesian machine learning.

(1) The first framework is conditional stochastic optimization (sketched below). This class of problems lies between traditional stochastic optimization and multistage stochastic programming. I will discuss algorithms and sample complexities for solving such problems under various structural assumptions on smoothness and (non)convexity.

(2) The second framework is based on min-max optimization (also sketched below). I will illustrate how learning tasks under uncertainty can be reduced to solving (non)convex-(non)concave min-max optimization problems, and I will present theoretical understanding and principled algorithms for min-max optimization in both the convex and nonconvex regimes.

In addition, I will discuss how these optimization frameworks and perspectives can be leveraged to build theoretically sound and practically efficient algorithms for reinforcement learning, meta-learning, and beyond.
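For orientation, a conditional stochastic optimization problem is typically written in the following nested form (a generic sketch of the standard formulation, not the specific instances from the talk; the outer function $f_\xi$, the inner function $g_\eta$, and the distributions of $\xi$ and $\eta$ are placeholders):

$$\min_{x} \; \mathbb{E}_{\xi}\Big[ f_{\xi}\big( \mathbb{E}_{\eta \mid \xi}\big[ g_{\eta}(x;\, \xi) \big] \big) \Big]$$

The inner conditional expectation sits inside the outer random function, so a plain sample average gives a biased gradient estimate; this nesting is what separates the problem from classical stochastic optimization and what drives the sample-complexity analysis.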
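Similarly, the min-max framework refers to stochastic saddle-point problems of the generic form (again a sketch; the payoff $F$ and the feasible sets $\mathcal{X}$, $\mathcal{Y}$ are placeholders):

$$\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; \mathbb{E}_{\xi}\big[ F(x,\, y;\, \xi) \big]$$

In adversarial training, for instance, $x$ plays the role of the model parameters and $y$ the worst-case perturbation. The convex-concave case admits classical saddle-point guarantees, while the (non)convex-(non)concave case is where much of the recent theory lies.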
March 4, 2020
12:30 pm (1h)

Discovery Building, Orchard View Room

Niao He