In many limited-budget experimental settings, such as A/B testing or protein design, there is a need for adaptive sampling to guide the discovery of as many true positives as possible while maintaining a low rate of false discoveries (i.e., false alarms). As in active learning for binary classification, this experimental design cannot be optimally chosen a priori; rather, the data must be collected sequentially and adaptively in a closed loop. However, unlike active classification, the problem of finding a set with a high true positive rate and a low false discovery rate (FDR) is not as well understood. In this talk, I'll discuss some recent work (joint with Kevin Jamieson) on this problem, highlighting connections to multiple hypothesis testing, classification, combinatorial bandits, and FDR control along the way.
March 6 @ 12:30 pm (1h)
Discovery Building, Orchard View Room