Systems | Information | Learning | Optimization
 

Covariance thresholding, kernel random matrices and sparse PCA

In Sparse Principal Component Analysis (PCA) we wish to reconstruct a low-rank matrix from noisy observations, under sparsity assumptions on the recovered factors. Johnstone and Lu (2004) formalized these assumptions in the ‘spiked covariance model’, wherein we observe $n$ i.i.d. samples from a $p$-dimensional Gaussian distribution $N(0, I + b v v^T)$. The population principal component $v$ is called the ‘spike’ and is assumed to be sparse in a certain basis. Johnstone and Lu also proposed a simple scheme that consistently estimates the support of $v$, provided the support size is smaller than $\sqrt{n/\log p}$.
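To make the model concrete, here is a minimal sketch (not from the paper; the function names and the top-$k$ selection rule are illustrative stand-ins) of drawing from the spiked covariance model and of a Johnstone–Lu style diagonal-thresholding estimator:

```python
import numpy as np

def sample_spiked(n, p, b, v, rng):
    """Draw n i.i.d. samples from N(0, I + b * v v^T); assumes ||v|| = 1."""
    Z = rng.standard_normal((n, p))           # isotropic Gaussian noise
    g = rng.standard_normal(n)                # latent factor along the spike
    return Z + np.sqrt(b) * np.outer(g, v)    # rows then have covariance I + b v v^T

def diagonal_thresholding(X, k):
    """Johnstone-Lu style estimator (sketch): keep the k coordinates with
    the largest sample variances, then take the top eigenvector of the
    sample covariance restricted to those coordinates."""
    n, p = X.shape
    variances = (X ** 2).mean(axis=0)
    support = np.argsort(variances)[-k:]      # candidate support of v
    S = X[:, support].T @ X[:, support] / n   # restricted sample covariance
    _, V = np.linalg.eigh(S)
    v_hat = np.zeros(p)
    v_hat[support] = V[:, -1]                 # eigh sorts ascending: last column is top
    return v_hat

rng = np.random.default_rng(0)
p, n, k, b = 500, 1000, 10, 2.0
v = np.zeros(p); v[:k] = 1 / np.sqrt(k)       # k-sparse unit-norm spike
X = sample_spiked(n, p, b, v, rng)
print(abs(v @ diagonal_thresholding(X, k)))   # overlap with the true spike
```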

Despite a considerable amount of work over the past ten years, no computationally efficient scheme had improved on this guarantee. We analyze Covariance Thresholding, a simple algorithm proposed by Krauthgamer, Nadler and Vilenchik (2013), and prove that it recovers supports of size up to $\sqrt{n}$. Under a certain complexity conjecture, it is impossible to significantly improve on this guarantee.
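A minimal sketch of the Covariance Thresholding idea, assuming the standard entrywise soft-thresholding form; the choice of constant $\tau$, the identity subtraction, and the omission of sample splitting and a support-cleaning step are simplifications, not the paper's exact procedure:

```python
import numpy as np

def covariance_thresholding(X, tau):
    """Covariance Thresholding (sketch): soft-threshold the entries of the
    centered sample covariance at level tau/sqrt(n) and return the top
    eigenvector of the result."""
    n, p = X.shape
    S = X.T @ X / n - np.eye(p)                        # subtract the noise covariance
    eta = tau / np.sqrt(n)                             # threshold level, tau = O(1)
    T = np.sign(S) * np.maximum(np.abs(S) - eta, 0.0)  # entrywise soft thresholding
    _, V = np.linalg.eigh(T)
    return V[:, -1]                                    # top eigenvector of T
```

The point of the $\tau/\sqrt{n}$ scale is that off-diagonal noise entries of the sample covariance have this order, so thresholding suppresses them while (most of) the signal entries survive.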

The main technical ingredient in our analysis is a new bound on the spectral norm of kernel random matrices.
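For a sense of the object involved: a kernel random matrix applies a fixed function entrywise to a Gram matrix of random vectors, and the soft-thresholded covariance above is essentially of this form. A minimal illustration (the scaling and the zeroed diagonal follow one common convention, not necessarily the paper's exact setup):

```python
import numpy as np

def kernel_random_matrix(n, p, f, rng):
    """Sketch: M_ab = f(<z_a, z_b>/sqrt(n)) for a != b, where z_1, ..., z_p
    are i.i.d. N(0, I_n) columns; the scaling makes the arguments O(1)."""
    Z = rng.standard_normal((n, p))
    G = Z.T @ Z / np.sqrt(n)          # off-diagonal entries approximately N(0, 1)
    M = f(G)                          # apply the kernel function entrywise
    np.fill_diagonal(M, 0.0)          # diagonal handled separately
    return M

rng = np.random.default_rng(0)
soft = lambda x, tau=3.0: np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
M = kernel_random_matrix(n=2000, p=1000, f=soft, rng=rng)
print(np.linalg.norm(M, 2))           # spectral norm: the quantity the bound controls
```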

This is joint work with Andrea Montanari.

May 15 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Yash Deshpande