Systems | Information | Learning | Optimization
 

Estimating large-scale time series network models

Learning/estimating networks from multivariate time series data is an important problem that arises in many applications, including computational neuroscience and social network analysis, among many others. Prior approaches either do not scale to multiple time series or rely on very restrictive parametric assumptions. In this talk, I present two approaches that provide learning guarantees for large-scale multivariate time series. The first involves a parametric regularized GLM framework with non-linear clipping and saturation effects that incorporates various low-dimensional structures. The second involves a non-parametric sparse additive model framework. Learning guarantees are provided in both cases, and the theoretical results are supported both by simulations and by performance comparisons on several data examples.
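For a concrete sense of the general problem setting (not the specific methods of the talk), the Python sketch below estimates a sparse autoregressive network from a multivariate time series by node-wise l1-regularized regression: each series' next value is regressed on the lagged values of all series. The function name `estimate_network` and the penalty level are illustrative assumptions, not from the talk.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_network(X, lam=0.1):
    """Estimate a sparse lag-1 network from a multivariate time series
    X of shape (T, p). Returns a (p, p) matrix A_hat where A_hat[i, j]
    is the estimated influence of series j at time t on series i at
    time t+1. Illustrative sketch only, not the speaker's estimator."""
    T, p = X.shape
    past, future = X[:-1], X[1:]          # lag-1 design and targets
    A_hat = np.zeros((p, p))
    for i in range(p):                    # one regularized regression per node
        model = Lasso(alpha=lam, fit_intercept=True)
        model.fit(past, future[:, i])
        A_hat[i] = model.coef_
    return A_hat

# Simulate a small, stable, sparse VAR(1) process and recover its support.
rng = np.random.default_rng(0)
p, T = 10, 2000
A = np.zeros((p, p))
A[rng.random((p, p)) < 0.1] = 0.4         # sparse true network
A /= max(1.0, 1.1 * np.max(np.abs(np.linalg.eigvals(A))))  # enforce stability
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.5 * rng.standard_normal(p)

A_hat = estimate_network(X, lam=0.02)
print("true edges recovered:",
      np.sum((np.abs(A_hat) > 1e-3) & (A != 0)), "of", np.count_nonzero(A))
```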
January 31 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Garvesh Raskutti