Systems | Information | Learning | Optimization
Advances in Gradient Descent Methods for Non-Convex Optimization

Thanks to a flurry of recent research motivated by applications in machine learning, the convergence of gradient descent methods for smooth, unconstrained non-convex optimization is now well understood in the centralized setting. In this talk I will discuss our progress towards understanding how the convergence of gradient descent methods (including SGD and acceleration) is …
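
For reference, a standard statement of the centralized guarantee the abstract calls "well understood" (included here as background, not as part of the talk): for an L-smooth objective f bounded below by f^*, gradient descent with constant step size 1/L drives the smallest gradient norm to zero at the classical O(1/T) rate,

\[
\min_{0 \le k < T} \|\nabla f(x_k)\|^2 \;\le\; \frac{2L\big(f(x_0) - f^*\big)}{T},
\qquad x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k).
\]

Because the problem is non-convex, the guarantee concerns approximate stationarity rather than global optimality, which is why convergence analyses in this setting track gradient norms.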