Robust statistics provides a powerful framework for quantifying the behavior of estimators when data are observed subject to imperfections that deviate from standard modeling assumptions. In this talk, we highlight recent work on statistical theory for robust estimators in high dimensions, with applications to compressed sensing and graphical model estimation. Central to our analysis are (1) a basic understanding of which classes of robust loss functions to employ in order to protect against particular deviations; and (2) rigorous theoretical statements connecting particular characteristics of the loss function to robustness properties of the resulting estimator. Such ideas have been well studied in low dimensions, but have only recently been brought to bear in high-dimensional contexts. Because many attractive robust loss functions are inherently nonconvex, we leverage results in optimization theory to devise computationally efficient methods for obtaining local or global optima with provably good properties. This further elucidates some of the mostly heuristic methods employed in low-dimensional robust estimation. We conclude by discussing challenges and open questions arising in the mostly unexplored territory lying between robust and high-dimensional statistics.
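As a concrete illustration of the convex/nonconvex distinction mentioned above (a sketch, not material from the talk itself), the Huber loss is convex and grows linearly in the tails, while Tukey's biweight loss is bounded, and hence nonconvex, so that gross outliers exert zero influence. The tuning constants 1.345 and 4.685 are the standard choices giving roughly 95% asymptotic efficiency under Gaussian noise.

```python
import numpy as np

def huber(r, delta=1.345):
    # Convex robust loss: quadratic near zero, linear in the tails,
    # so large residuals are downweighted but never ignored.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def tukey_biweight(r, c=4.685):
    # Bounded (hence nonconvex) robust loss: constant for |r| > c,
    # so gross outliers contribute nothing to the gradient.
    inside = (c**2 / 6.0) * (1.0 - (1.0 - (r / c)**2)**3)
    return np.where(np.abs(r) <= c, inside, c**2 / 6.0)
```

The boundedness of the biweight loss is exactly what makes the resulting M-estimation problem nonconvex, motivating the optimization-theoretic guarantees for local optima discussed in the abstract.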
Discovery Building, Orchard View Room
Po-Ling Loh