I’ll begin on a lighter note with an issue raised in the criticism of a recent research study on ESP (extrasensory perception). Here is a link to a New York Times article on the controversy: http://www.nytimes.com/2011/01/11/science/11esp.html?_r=1 . At issue is what constitutes a statistically significant finding, and you might be surprised that even the experts don’t all agree.
Then I’ll discuss some research work I’ve done in collaboration with Rui Castro, Jarvis Haupt, and Matt Malloy concerning high-dimensional multiple testing problems. For example, consider testing to decide which of n>1 genes are differentially expressed in a certain disease. Suppose each test takes the form H0: X ~ N(0,1) vs. H1: X ~ N(m,1), for m>0, where N(m,1) is the Gaussian distribution with mean m and variance 1. When n is large, reliable decisions are possible only if the “signal amplitude” m exceeds sqrt(2 log n). This is simply because the magnitude of the largest of n independent N(0,1) noises is on the order of sqrt(2 log n). Non-sequential methods cannot overcome this curse of dimensionality. Sequential methods, however, are capable of breaking this curse by focusing measurement/experimentation resources on certain components at the expense of others. I will discuss a simple sequential method, in the spirit of classical sequential probability ratio testing, that is reliable as long as the signal amplitude satisfies m > sqrt(4 log s), where s is the number of tests where the truth is H1. In many applications, s is much smaller than n, and so the gains are “significant”.
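Both phenomena above are easy to see in simulation. The sketch below (my own illustrative code, not the algorithm from the talk) first checks that the largest of n standard-normal noise magnitudes concentrates near sqrt(2 log n), and then runs a toy multi-pass "sequential thresholding" procedure, one simple instance of the sequential idea: each pass takes a fresh measurement of every surviving component and discards those that measure negative, so noise components are halved each pass while signals of amplitude near sqrt(4 log s) almost always survive. The choices n = 100,000, s = 10, and the number of passes are my assumptions for the demo.

```python
import math
import random

random.seed(1)

n = 100_000                       # number of hypotheses/components
s = 10                            # number of true signals (H1 components)
m = math.sqrt(4 * math.log(s))    # signal amplitude at the sequential threshold

# Curse of dimensionality for non-sequential testing: the largest of
# n independent N(0,1) noise magnitudes concentrates near sqrt(2 log n).
noise_max = max(abs(random.gauss(0.0, 1.0)) for _ in range(n))
print(f"max |noise| over n={n}: {noise_max:.2f} "
      f"(sqrt(2 log n) = {math.sqrt(2 * math.log(n)):.2f})")
print(f"signal amplitude m = sqrt(4 log s) = {m:.2f}")

# Toy sequential thresholding: each pass re-measures the surviving
# components with fresh N(mean, 1) observations and keeps only those
# measuring positive.  A noise component survives a pass with
# probability 1/2, so after about log2(n) passes almost only the
# signal components remain.
means = [m] * s + [0.0] * (n - s)
survivors = list(range(n))
passes = int(math.log2(n)) + 1
for _ in range(passes):
    survivors = [i for i in survivors
                 if random.gauss(means[i], 1.0) > 0]

true_hits = sum(1 for i in survivors if i < s)
print(f"after {passes} passes: {len(survivors)} survivors, "
      f"{true_hits} of {s} true signals recovered")
```

Note that a signal well below the non-sequential sqrt(2 log n) ≈ 4.8 barrier survives here, because the total measurement budget is only about 2n (each pass halves the number of surviving components).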
Discovery Building, Orchard View Room