Systems | Information | Learning | Optimization
 

SILO: Bayesian Preference Exploration: Making Optimization Accessible to Non-Experts

Abstract: Optimization problems are everywhere — routing trucks, buying groceries, building a datacenter. Yet optimization methodology is hard to use: it requires the user to write down their objective and constraints as mathematical functions. In practice, the objective and constraints are unknown and must be tuned iteratively. An expert presents …
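
The barrier described here is concrete: off-the-shelf solvers expect the user to hand over the objective and constraints as explicit functions before any optimization can run. A minimal sketch of that classical workflow, using SciPy on a made-up two-variable purchasing problem (the costs, demand values, and threshold below are invented for illustration, not taken from the talk):

import numpy as np
from scipy.optimize import minimize

# Toy purchasing problem (all numbers invented): choose nonnegative quantities
# x of two goods that minimize cost while meeting a demand target.
cost = np.array([3.0, 5.0])       # unit cost of each good (assumed)
value = np.array([1.0, 2.0])      # demand units satisfied per unit bought (assumed)

objective = lambda x: cost @ x                                 # total spend
demand = {"type": "ineq", "fun": lambda x: value @ x - 10.0}   # require value @ x >= 10

result = minimize(objective, x0=np.zeros(2), method="SLSQP",
                  bounds=[(0, None), (0, None)], constraints=[demand])
print(result.x, result.fun)

Even in this toy case the user must commit to numeric weights and a threshold up front; when those are unknown, the iterative tuning loop the abstract describes takes over.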

SILO: Qualia Optimization: Exploring Mathematical Formulations of AI Experience

Abstract: This talk explores the speculative question: what if current or future AI systems have qualia, such as pain or pleasure? It does so by assuming that AI systems might someday possess qualia—and that the quality of these subjective experiences should be considered alongside performance metrics. Concrete mathematical problem settings, …
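
One way to read "considered alongside performance metrics" is as a constrained or multi-objective problem. Purely as an illustration (the experience signal q_t and the threshold \tau are assumptions introduced here, not taken from the talk), such a setting might be written as

\max_{\pi} \; \mathbb{E}\Big[\sum_{t} r_t\Big] \quad \text{subject to} \quad \mathbb{E}\Big[\sum_{t} q_t\Big] \ge \tau,

where r_t is the usual task reward earned by policy \pi and q_t is a hypothetical per-step measure of the quality of the system's experience.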

SILO: First-Order Algorithms for Large-Scale Optimization

Abstract: It is well known that for nonconvex unconstrained optimization with Lipschitz smoothness, gradient descent and stochastic gradient descent are the optimal first-order algorithms in the deterministic and stochastic settings, respectively. This naturally raises two questions: In the constrained setting, is it possible to design algorithms that achieve the same …
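
For context, "optimal" here is usually stated in terms of reaching an \epsilon-stationary point: gradient descent needs on the order of \epsilon^{-2} gradient evaluations and SGD on the order of \epsilon^{-4} stochastic gradients, and both rates match known lower bounds for first-order methods. In the constrained setting, the most common first-order baseline is projected gradient descent; a minimal sketch on a toy box-constrained problem (the objective, box, step size, and iteration count are invented for illustration, not from the talk):

import numpy as np

# Projected gradient descent on a toy smooth nonconvex problem over a box.
def f(x):       return np.sin(x[0]) + 0.5 * x[1] ** 2
def grad_f(x):  return np.array([np.cos(x[0]), x[1]])
def project(x): return np.clip(x, -1.0, 1.0)   # Euclidean projection onto the box [-1, 1]^2

x = np.array([1.5, 1.5])
step = 0.1                                     # step size <= 1/L; this f is L-smooth with L = 1
for _ in range(200):
    x = project(x - step * grad_f(x))          # gradient step, then project back onto the box
print(x, f(x))                                 # settles at the boundary x[0] = -1, where the projection is active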

SILO: Searching for architectures and BERT moments in specialized AI applications

Abstract: In 2018, advances in architecture design and self-supervised learning led to the “BERT moment” in natural language processing, in which supervised learning workflows were permanently supplanted by the pretraining and fine-tuning of massive Transformer models. This spurred scientists in more specialized areas—e.g., genomics, satellite imaging, and time series forecasting—to develop …
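
The workflow referenced here is the now-standard two-stage recipe: pretrain a large Transformer with self-supervision, then fine-tune it on a small labeled downstream task. A minimal sketch of the fine-tuning half using the Hugging Face transformers library (the checkpoint name, toy data, label count, and hyperparameters are placeholders, not taken from the talk):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pretrained checkpoint and attach a fresh 2-way classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["great talk", "hard to follow"]        # tiny made-up labeled dataset
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                              # a few fine-tuning steps on the toy batch
    out = model(**batch, labels=labels)         # the model computes the classification loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

The question for specialized domains is whether an analogous pretrained checkpoint plus a light fine-tuning pass can replace bespoke supervised pipelines for data such as genomes, satellite images, or time series.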