Systems | Information | Learning | Optimization

Learning Social Networks with Online Convex Programming and Parametric Dynamics

We describe a novel approach to online convex programming in dynamic settings. Many existing online learning methods are characterized by regret bounds that quantify the gap between the performance of the online algorithm and that of a comparator. In previous work, this comparator was either assumed static over time, or sublinear tracking regret bounds were available only when the comparator was piecewise constant or slowly varying. In contrast, the proposed method determines the best dynamical model from a parametric family to incorporate into the prediction process, and admits regret bounds that scale with the deviation of a comparator from that dynamical model. In other words, this approach can yield low regret relative to highly variable, dynamic comparators. This result is proved for loss functions corresponding to the negative log-likelihood of an exponential family probability distribution, and several properties of the exponential family are exploited to ensure low regret. We then show how this formulation applies to learning social network structure by estimating the parameters of a Hawkes process, where instead of observing actual links between agents in the network, we observe only the agents' actions.
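The central recipe in the abstract is a two-step online update: a descent step on the newest loss, followed by a prediction step through the chosen parametric dynamical model; for the network application, the loss is the negative log-likelihood of a Hawkes process whose influence matrix encodes the unobserved links. Below is a minimal Python sketch of both pieces under simplifying assumptions (a Euclidean Bregman divergence for the descent step, exponential Hawkes kernels); the names dynamic_online_update, hawkes_intensity, Phi, eta, and beta are illustrative stand-ins, not notation from the talk.

    import numpy as np

    def dynamic_online_update(theta, g, eta, Phi):
        # One online round: descend on the observed gradient (the
        # Euclidean special case of a mirror descent step), then advance
        # the iterate through the parametric dynamical model Phi so the
        # next prediction can track a comparator that roughly follows Phi.
        theta_tilde = theta - eta * g
        return Phi(theta_tilde)

    def hawkes_intensity(mu, A, beta, event_times, event_nodes, t):
        # Intensity of each node of a multivariate Hawkes process at
        # time t with exponential kernels:
        #   lambda_i(t) = mu_i + sum_{t_k < t} A[i, u_k] * beta * exp(-beta * (t - t_k))
        # The influence matrix A plays the role of the unobserved social
        # network: A[i, j] > 0 means agent j's actions excite agent i.
        lam = mu.astype(float)
        for t_k, u_k in zip(event_times, event_nodes):
            if t_k < t:
                lam += A[:, u_k] * beta * np.exp(-beta * (t - t_k))
        return lam

    # Hypothetical usage: one tracking round with a simple decay dynamic.
    Phi = lambda th: 0.95 * th              # assumed dynamical model
    theta = np.zeros(3)
    g = np.array([0.2, -0.1, 0.05])         # stand-in loss gradient
    theta = dynamic_online_update(theta, g, eta=0.1, Phi=Phi)

In the talk's setting, g would come from the exponential-family negative log-likelihood (for the network problem, the Hawkes log-likelihood differentiated in mu and A given the observed action times), so the network structure is estimated from actions alone.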
March 12, 12:30 pm (1h)

Discovery Building, Orchard View Room

Eric Hall