Systems | Information | Learning | Optimization
 

Nonconvex Distributed Optimization

We consider the distributed optimization problem in which a group of agents cooperatively computes the optimizer of the average of their local functions. To solve this problem, we propose a novel algorithm that adjusts the ratio between the number of communications and computations to achieve fast convergence. In particular, the iterates of our algorithm converge to the optimizer at the same rate as those of centralized gradient descent in terms of the number of computations. We provide variants of our algorithm for the cases where the communication network is either time-varying and directed, or constant and undirected. We compare our algorithm with known algorithms on a distributed target localization problem.
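
To make the setup concrete, below is a minimal Python sketch of the problem class the abstract describes: agents minimizing the average of local functions by alternating local gradient steps with rounds of neighbor averaging, where the number of communication rounds per computation can be tuned. The ring topology, quadratic local costs, step size, and per-iteration communication count are illustrative assumptions, not the algorithm presented in the talk.

import numpy as np

# Hypothetical example: n agents, each holding a local quadratic cost
# f_i(x) = 0.5 * (x - b_i)^2. The network's goal is to minimize the average
# f(x) = (1/n) * sum_i f_i(x), whose optimizer is mean(b).
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)  # local data; the optimizer of the average is b.mean()

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def distributed_gradient(num_comms, num_iters=200, step=0.5):
    """One local gradient computation followed by num_comms communication
    (neighbor-averaging) rounds per iteration. Returns the agents' iterates."""
    x = np.zeros(n)  # each agent holds a scalar estimate of the optimizer
    for _ in range(num_iters):
        x = x - step * (x - b)        # gradient step on each local f_i
        for _ in range(num_comms):    # communication rounds
            x = W @ x                 # average with neighbors
    return x

# More communications per computation drive the agents toward consensus on
# the centralized optimizer; with few, a steady-state error remains.
for k in (1, 2, 5):
    x = distributed_gradient(num_comms=k)
    print(f"{k} comm round(s)/iter: max error = {np.abs(x - b.mean()).max():.2e}")

Increasing num_comms trades extra communication for iterates that more closely track centralized gradient descent, which is the ratio the abstract refers to adjusting.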
May 30 @ 4:00 pm (1h)

Inn Wisconsin Room, Memorial Union

Bryan Van Scoy