Systems | Information | Learning | Optimization
 

Balancing Computation and Communication in Distributed Optimization

Distributed optimization methods involve two key operations: communication and computation. More specifically, at every iteration (or every few iterations) of a distributed algorithm, each node in the network exchanges information with its neighboring nodes (communication) and computes a local (sub)gradient (computation). The standard practice of counting iterations overlooks the cost incurred inside each iteration. Moreover, different applications deploying distributed methods may prefer different compositions of communication and computation.
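
To make the decomposition concrete, here is a minimal sketch of a single iteration of distributed gradient descent, assuming a doubly stochastic mixing matrix W encodes the network; the name dgd_step and all parameters are illustrative, not taken from the talk.

    import numpy as np

    def dgd_step(x, W, grads, alpha):
        """One decomposed iteration of distributed gradient descent (DGD).

        x     : (n, d) array; row i is node i's current iterate
        W     : (n, n) doubly stochastic mixing matrix; W[i, j] > 0 only
                when nodes i and j are neighbors
        grads : list of n callables; grads[i](x_i) returns node i's
                local (sub)gradient at x_i
        alpha : step size
        """
        # Communication step: each node averages its neighbors' iterates.
        x_mixed = W @ x
        # Computation step: each node evaluates its local (sub)gradient.
        g = np.stack([grads[i](x[i]) for i in range(len(grads))])
        return x_mixed - alpha * g

Counting iterations alone hides the fact that the single matrix product (one communication round) and the n gradient evaluations can have very different costs depending on the network and the objective.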

In this talk, we present a flexible algorithmic framework in which communication and computation steps are explicitly decomposed, enabling the algorithm to be customized for different applications. We apply this framework to first-order methods, such as the well-known distributed gradient descent, and show that the customized algorithms compare favorably with their base algorithms. Through this framework, we also obtain an exact first-order method, in which we adaptively increase the number of communication steps to achieve exact convergence to the optimal solution.
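
As a rough illustration of the framework's idea, the sketch below exposes the number of consensus (communication) rounds per gradient step as a user-supplied schedule; the name flexible_dgd, the example schedule k + 1, and all parameters are hypothetical placeholders, not the method presented in the talk.

    import numpy as np

    def flexible_dgd(x0, W, grads, alpha, num_iters, comm_schedule):
        """Distributed gradient method with a tunable communication budget.

        comm_schedule(k) sets the number of consensus rounds at iteration
        k: a constant schedule keeps iterations cheap, while an increasing
        schedule trades extra communication for exact convergence.
        """
        x = x0.copy()
        for k in range(num_iters):
            # Communication: comm_schedule(k) rounds of neighbor averaging.
            for _ in range(comm_schedule(k)):
                x = W @ x
            # Computation: one local (sub)gradient step per node.
            g = np.stack([grads[i](x[i]) for i in range(x.shape[0])])
            x = x - alpha * g
        return x

    # Illustrative adaptive schedule: more consensus rounds as k grows.
    # x_out = flexible_dgd(x0, W, grads, alpha=0.05, num_iters=100,
    #                      comm_schedule=lambda k: k + 1)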

https://vimeo.com/242635604

November 8 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Ermin Wei