The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. In this talk, I introduce a simple algorithm, termed the one-step thresholding (OST) algorithm, for model-order agnostic model selection in linear inference problems. I utilize two geometric measures of coherence, namely worst-case coherence and average coherence, among the columns of a design matrix to provide an in-depth analysis of OST for model selection. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to the rank deficiency of the submatrices of the design matrix. In particular, I show that OST has the ability to perform near-optimally for a number of generic (random or deterministic) matrices as long as the design matrix satisfies conditions that are computable in polynomial time. In addition, I discuss extensions of the OST model selection results to the recovery of sparse signals. Finally, I present some findings on explicitly designing matrices with small average coherence, which is the key to guaranteeing that algorithms such as OST succeed.
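As an illustration of the quantities discussed in the abstract, the sketch below computes worst-case and average coherence of a design matrix and applies a one-step thresholding rule: select the columns whose correlation with the observation vector exceeds a threshold. This is a minimal sketch, not the speaker's actual algorithm or analysis; the function names, the particular form of the average-coherence computation, and the threshold value `lam` are assumptions for illustration.

```python
import numpy as np

def coherences(X):
    """Worst-case and average coherence of the columns of X (assumed definitions)."""
    Xn = X / np.linalg.norm(X, axis=0)          # unit-norm columns
    G = Xn.T @ Xn                               # Gram matrix
    off = G - np.eye(G.shape[0])                # off-diagonal inner products
    worst = np.max(np.abs(off))                 # worst-case coherence
    avg = np.max(np.abs(off.sum(axis=1))) / (G.shape[0] - 1)  # average coherence
    return worst, avg

def ost(X, y, lam):
    """One-step thresholding sketch: keep columns whose normalized
    correlation with y exceeds the threshold lam (model-order agnostic:
    no knowledge of the true support size is used)."""
    Xn = X / np.linalg.norm(X, axis=0)
    scores = np.abs(Xn.T @ y)
    return np.flatnonzero(scores > lam)
```

With a well-conditioned random Gaussian design and a noiseless sparse signal, a single thresholding pass of this kind typically recovers the support, which is the regime the coherence conditions in the talk are meant to characterize.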
October 13 @ 12:30 pm (1h)
Discovery Building, Orchard View Room