Machine learning and AI models hold great promise for accelerating scientific discovery by intelligently planning experiments and learning from their results. While existing models can be applied in this setting, I will argue in this talk for decision-aware models designed explicitly for adaptive experimentation rather than for traditional predictive accuracy. I will discuss two recent works on decision-aware models for Bayesian optimization, a framework for adaptive experimentation widely used in hyperparameter tuning, robotics, and drug discovery. The first work presents a case study on linear models, where a simple geometric modification that has no impact on supervised regression performance yields orders-of-magnitude improvements in Bayesian optimization, even rivaling the performance of universal-approximating models. The second work introduces a framework for decision-aware model selection, demonstrating that models with inferior predictive performance can be superior for the sequential decision-making required by Bayesian optimization. I will conclude with future directions, discussing how decision-awareness can be incorporated into state-of-the-art models.
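For readers unfamiliar with the framework, the core of Bayesian optimization is a loop: fit a surrogate model to the experiments run so far, maximize a cheap acquisition function to pick the next experiment, run it, and repeat. The sketch below is illustrative only and is not taken from the talk; it substitutes a crude distance-based surrogate for the Gaussian process a real implementation would use, and all names (`objective`, `surrogate`, `acquisition`) are hypothetical.

```python
import random

# Minimal, illustrative Bayesian optimization loop (assumption: a toy
# 1-D problem; real implementations use a probabilistic surrogate such
# as a Gaussian process rather than the crude stand-in below).

def objective(x):
    # Expensive black-box function we want to maximize (toy stand-in).
    return -(x - 0.3) ** 2

def surrogate(x, data):
    # Crude surrogate: inverse-distance-weighted mean of observed values,
    # with an "uncertainty" that grows with distance from observed points.
    dists = [abs(x - xi) for xi, _ in data]
    weights = [1.0 / (d + 1e-9) for d in dists]
    mean = sum(w * y for w, (_, y) in zip(weights, data)) / sum(weights)
    uncertainty = min(dists)
    return mean, uncertainty

def acquisition(x, data, kappa=2.0):
    # Upper confidence bound: predicted value plus an exploration bonus.
    mean, unc = surrogate(x, data)
    return mean + kappa * unc

random.seed(0)
data = [(x, objective(x)) for x in (0.0, 1.0)]  # initial design
for _ in range(20):
    # Optimize the (cheap) acquisition over candidates, not the objective.
    candidates = [random.random() for _ in range(200)]
    x_next = max(candidates, key=lambda x: acquisition(x, data))
    data.append((x_next, objective(x_next)))  # run the "experiment"

best_x, best_y = max(data, key=lambda p: p[1])
print(best_x, best_y)
```

The `kappa` parameter trades off exploration (sampling where the surrogate is uncertain) against exploitation (sampling where it predicts high values); decision-aware modeling, in the sense of the talk, asks how the surrogate itself should be built to serve this loop rather than to minimize prediction error.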
Bio:
Geoff Pleiss is an assistant professor in the Department of Statistics at the University of British Columbia, as well as a Canada CIFAR AI Chair affiliated with the Vector Institute. He earned a Ph.D. in Computer Science from Cornell University under the supervision of Prof. Kilian Weinberger. Geoff’s research group specializes in uncertainty quantification in machine learning, especially within the contexts of Bayesian optimization, spatiotemporal modelling, and scientific discovery. He has also co-founded several widely used open-source software projects, including the GPyTorch, LinearOperator, and CoLA libraries.
H. F. DeLuca Forum
Geoff Pleiss, University of British Columbia