Abstract:
Markov Chain Monte Carlo (MCMC) and local-search optimization methods have been used extensively in statistical practice for many decades. However, their exact theoretical performance has remained strikingly elusive, even for simple parametric estimation tasks. This is in stark contrast to other classes of estimators, such as low-degree polynomials or spectral methods, for which a much deeper theoretical understanding has been achieved in recent years.
In this talk we are going to discuss several recent results that characterize the performance of a large class of (low-temperature) MCMC methods for a series of canonical estimation models, such as sparse tensor PCA and sparse linear regression. This characterization reveals that in some models (e.g., sparse regression) MCMC methods achieve the performance of the conjecturally optimal polynomial-time estimators, while in other cases (e.g., sparse PCA) they significantly underperform. If time permits, we are going to discuss how one can boost MCMC methods for sparse PCA to achieve polynomial-time optimality by appropriately extending the parameter space.
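To fix ideas, below is a minimal, purely illustrative sketch (not the speaker's exact algorithm) of a low-temperature Metropolis chain over k-sparse supports for linear regression, sampling from a Gibbs measure proportional to exp(-beta * loss). All names and parameter values (X, y, k, beta_temp, the swap proposal) are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic k-sparse linear regression instance: y = X @ beta_true + noise.
n, p, k = 100, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
support_true = rng.choice(p, size=k, replace=False)
beta_true[support_true] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def loss(S):
    # Least-squares loss after refitting coefficients on support S.
    XS = X[:, sorted(S)]
    coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
    return float(np.sum((y - XS @ coef) ** 2))

beta_temp = 50.0  # inverse temperature; "low temperature" means beta_temp is large
S = set(rng.choice(p, size=k, replace=False).tolist())  # initial k-sparse support
cur = loss(S)

for _ in range(2000):
    # Propose swapping one coordinate inside the support for one outside it.
    i = int(rng.choice(sorted(S)))
    j = int(rng.choice(sorted(set(range(p)) - S)))
    S_new = (S - {i}) | {j}
    new = loss(S_new)
    # Metropolis acceptance rule for the Gibbs measure exp(-beta_temp * loss).
    if new <= cur or rng.random() < np.exp(-beta_temp * (new - cur)):
        S, cur = S_new, new

print("recovered support:", sorted(S))
print("true support:     ", sorted(support_true.tolist()))

At large beta_temp the chain behaves like a randomized local search on the loss landscape, which is the regime the talk's results concern.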
Bio:
Ilias Zadik is an Assistant Professor of Statistics and Data Science at Yale University. His research mainly focuses on the mathematical theory of statistics and its many connections with other fields such as computer science, probability theory, and statistical physics. His primary area of interest is the study of “computational-statistical trade-offs,” where the goal is to understand whether the computational bottlenecks arising in modern statistical models are unavoidable or merely a limitation of currently used techniques. Prior to Yale, he held postdoctoral positions at MIT and NYU. He received his PhD from MIT in 2019.