Learning with Dependent Data

Several important families of computational and statistical results in machine learning and randomized algorithms rely on statistical independence of the data. The scope of such results includes the Johnson-Lindenstrauss Lemma (JLL), the Restricted Isometry Property (RIP), regression models, and stochastic optimization. In this talk, we will discuss a new result on quadratic forms of random vectors which allows elements of the vector to be statistically dependent, even adaptively generated, in contrast to the need for independence in existing results. We will discuss a straightforward generalization of the result to random matrices with dependent rows and illustrate how JLL- and RIP-type results can be extended to such random matrices with dependent or adaptively chosen rows. We will briefly discuss how the ability to handle adaptively generated rows makes the result suitable for smoothed analysis of contextual linear bandits. We will conclude with a discussion of the need for incorporating such dependence in the analysis of stochastic optimization methods.
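As background for the independence assumption the talk revisits, here is a minimal sketch of the classical JLL setting: a random projection with i.i.d. Gaussian entries approximately preserves pairwise distances. All dimensions and names below are illustrative choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n points in R^d, projected down to R^k.
n, d, k = 20, 1000, 300
X = rng.normal(size=(n, d))

# Classical JLL projection: i.i.d. N(0, 1/k) entries -- the independent-entry
# setting that the talk's dependent/adaptive-row results generalize.
P = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
Y = X @ P

def pairwise_sq_dists(Z):
    # Squared Euclidean distances between all pairs of rows of Z.
    sq = (Z ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * Z @ Z.T

D_orig = pairwise_sq_dists(X)
D_proj = pairwise_sq_dists(Y)

# Off-diagonal ratios of projected to original squared distances;
# JLL says these concentrate near 1 with high probability.
mask = ~np.eye(n, dtype=bool)
ratios = D_proj[mask] / D_orig[mask]
print(ratios.min(), ratios.max())
```

With these sizes the distortion is small (roughly on the order of sqrt(log n / k)), so the printed ratios cluster near 1. The results discussed in the talk aim to retain this kind of guarantee when the entries or rows of the projection are dependent or adaptively generated.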

October 16 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Arindam Banerjee