Systems | Information | Learning | Optimization
 

Speeding up Permutation Testing in Neuroimaging and An Asynchronous Parallel Stochastic Coordinate Descent Algorithm

Speaker 1: Chris Hinrichs
Abstract: Multiple hypothesis testing is a significant problem in nearly all neuroimaging studies. Correcting for this phenomenon requires a reliable estimate of the Family-Wise Error Rate (FWER). The well-known Bonferroni correction method, while simple to implement, is quite conservative and can substantially under-power a study because it ignores dependencies between test statistics. Permutation testing, on the other hand, is an exact, non-parametric method of estimating the FWER for a given $\alpha$-threshold, but for acceptably low thresholds the computational burden can be prohibitive. In this paper, we observe that permutation testing in fact amounts to populating the columns of a very large matrix P. By analyzing the spectrum of this matrix, we see that under certain conditions P admits a low-rank plus low-variance residual decomposition, which makes it suitable for highly sub-sampled (on the order of 0.5%) matrix completion methods. We therefore propose a novel permutation testing methodology that offers a large speedup without sacrificing the fidelity of the estimated FWER. Our evaluations on four different neuroimaging datasets show that a computational speedup factor of roughly 50 can be achieved while recovering the FWER distribution to very high accuracy. Further, we show that the estimated $\alpha$-threshold is also recovered faithfully and is stable.
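
The following is a minimal Python sketch of the baseline procedure the talk accelerates: each random relabeling of the subjects fills one column of the statistic matrix P, and the (1 - alpha) quantile of the column-wise maxima gives the FWER-controlling threshold. The synthetic data and all parameter values are illustrative only, and the paper's actual contribution (completing P from roughly 0.5% of its entries using its low-rank plus low-variance structure) is not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic data: n subjects x v voxels, with a binary group labeling.
    n, v, n_perm = 40, 2000, 1000
    X = rng.standard_normal((n, v))
    y = np.array([0] * (n // 2) + [1] * (n // 2))

    def voxelwise_t(X, y):
        """Two-sample t-statistic at every voxel for a given labeling."""
        a, b = X[y == 0], X[y == 1]
        return stats.ttest_ind(a, b, axis=0).statistic

    # Each permutation of the labels fills one column of the
    # (voxels x permutations) matrix P.
    P = np.empty((v, n_perm))
    for j in range(n_perm):
        P[:, j] = voxelwise_t(X, rng.permutation(y))

    # Null distribution of the maximum statistic; its (1 - alpha) quantile
    # is the threshold that controls the FWER at level alpha.
    max_null = P.max(axis=0)
    alpha = 0.05
    t_threshold = np.quantile(max_null, 1 - alpha)
    print(f"alpha = {alpha}: max-statistic threshold = {t_threshold:.3f}")

The loop over permutations dominates the cost; the proposed method instead computes only a small random subset of the entries of P and fills in the remainder by matrix completion.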

Speaker 2: Ji Liu
Abstract: We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate ($1/K$) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is $O(n^{1/2})$ in unconstrained optimization and $O(n^{1/4})$ in the separable-constrained case, where $n$ is the dimension of the optimization variable. We describe results from an implementation on 40-core processors.
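
As a rough illustration of the update rule (not the authors' implementation), the Python sketch below runs a lock-free asynchronous stochastic coordinate descent on a strongly convex quadratic: each thread repeatedly samples a coordinate, evaluates that coordinate of the gradient against the shared, possibly stale iterate, and writes the update without locking. The problem instance, step size, and thread count are all hypothetical, and CPython's global interpreter lock means this demonstrates the asynchronous updates rather than the near-linear multicore speedup reported in the talk.

    import threading
    import numpy as np

    rng = np.random.default_rng(1)

    # Convex quadratic f(x) = 0.5 x^T A x - b^T x with A positive definite,
    # so the minimizer solves A x = b.
    n = 200
    M = rng.standard_normal((n, n))
    A = M.T @ M + n * np.eye(n)
    b = rng.standard_normal(n)
    x = np.zeros(n)                 # shared iterate, updated without locks
    step = 1.0 / np.diag(A).max()   # conservative coordinate step size

    def worker(n_updates, seed):
        local_rng = np.random.default_rng(seed)
        for _ in range(n_updates):
            i = local_rng.integers(n)   # sample a coordinate uniformly
            g = A[i] @ x - b[i]         # coordinate i of the (possibly stale) gradient
            x[i] -= step * g            # unsynchronized write to the shared iterate

    threads = [threading.Thread(target=worker, args=(5000, s)) for s in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("residual ||Ax - b|| =", np.linalg.norm(A @ x - b))

The step size 1 / max_i A_ii is a conservative choice below each coordinate-wise Lipschitz constant; the paper's analysis relates the admissible step size to the degree of asynchrony, which is where the $O(n^{1/2})$ and $O(n^{1/4})$ processor bounds arise.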

October 16 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Chris Hinrichs, Ji Liu