Classical stochastic optimization models usually involve expected-value objective functions. However, such models do not cover the minimization of a composition of two or more expected-value functions, i.e., the stochastic nested composition optimization problem.
Stochastic composition optimization finds wide application in estimation, risk-averse optimization, dimension reduction and reinforcement learning.
We propose a class of stochastic compositional first-order methods. We prove that the algorithms converge almost surely to an optimal solution for convex optimization problems (or to a stationary point for nonconvex problems), as long as such a solution exists.
The convergence analysis involves the interplay of two martingales operating on different timescales. We obtain rate-of-convergence results under various assumptions and show that the algorithms achieve the optimal sample-error complexity in several important special cases. These results provide the best-known rate benchmarks for stochastic composition optimization.
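To make the two-timescale idea concrete, the following is a minimal sketch (not the authors' exact method) of a stochastic compositional gradient step for a problem of the form min_x f(E[g_w(x)]). The toy instance, stepsizes, and all function names are illustrative assumptions: here g_w(x) = x - 1 + w with zero-mean noise w, and f(y) = y^2, so the minimizer is x* = 1. A fast-timescale running average tracks the inner expectation while the decision variable moves on a slower timescale.

```python
import random

def scgd(iters=20000, seed=0):
    """Sketch of two-timescale stochastic compositional gradient descent
    on the toy problem min_x f(E[g_w(x)]) with
        g_w(x) = x - 1 + w,  w ~ N(0, 1),  f(y) = y**2,
    whose minimizer is x* = 1. Illustrative only."""
    rng = random.Random(seed)
    x = 0.0  # decision variable (slow timescale)
    y = 0.0  # running estimate of the inner expectation E[g_w(x)] (fast timescale)
    for k in range(1, iters + 1):
        alpha = 1.0 / k ** 0.75  # slow stepsize for x
        beta = 1.0 / k ** 0.5    # faster stepsize for the inner estimate
        w = rng.gauss(0.0, 1.0)
        # update the inner-function estimate with a fresh sample of g_w(x)
        y = (1.0 - beta) * y + beta * (x - 1.0 + w)
        # stochastic gradient of f(g(x)) via the chain rule:
        # g_w'(x) = 1 for this toy g, and f'(y) = 2y
        grad = 1.0 * (2.0 * y)
        x -= alpha * grad
    return x
```

Running the sketch returns an iterate close to the minimizer x* = 1; the two diminishing stepsizes play the role of the two interacting timescales in the convergence analysis.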
We demonstrate applications to statistical estimation and reinforcement learning. In addition, we introduce some recent developments in nonconvex statistical optimization.
Discovery Building, Orchard View Room