Systems | Information | Learning | Optimization

SILO: Data Science Institute Talks

Bio: Kyle Cranmer is a professor in the Physics Department with affiliate appointments in Computer Sciences and Statistics. He is also the David R. Anderson Director of the University of Wisconsin-Madison’s Data Science Institute (DSI). Professor Cranmer obtained his Ph.D. in Physics from the University of Wisconsin-Madison in 2005 and …

SILO: Bayesian Optimization Beyond the Black Box: Leveraging Computational Structure for Efficient and Scalable Decision-Making

Abstract: Bayesian optimization (BO) is a principled framework for optimizing expensive, noisy objective functions, but traditional BO treats the system as a black box and learns only through input-output queries. In many scientific and engineering settings, this assumption is unnecessarily restrictive; valuable computational structure is often available, even if the …
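
For readers unfamiliar with the black-box baseline the talk departs from, below is a minimal sketch of standard BO in plain NumPy/SciPy: a Gaussian-process surrogate scored with expected improvement, probing the system only through input-output queries. The objective f, the kernel length-scale, the candidate grid, and the iteration budget are all illustrative assumptions, not details from the talk.

```python
import numpy as np
from scipy.stats import norm

def f(x):                        # stand-in "expensive" objective (assumption)
    return np.sin(3 * x) + 0.1 * x**2

def rbf(a, b, ls=0.3):           # squared-exponential kernel, unit variance
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 3)        # small initial design
y = f(X)
grid = np.linspace(-2, 2, 400)   # candidate points for the acquisition

for _ in range(10):
    K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
    k_star = rbf(grid, X)
    mu = k_star @ K_inv @ y                        # GP posterior mean
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)
    sd = np.sqrt(np.maximum(var, 1e-12))           # GP posterior std dev
    z = (y.min() - mu) / sd
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]                   # next input-output query
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(f"best observed: x = {X[np.argmin(y)]:.3f}, f(x) = {y.min():.3f}")
```

Everything the loop learns comes from the queries f(x_next); exploiting known computational structure beyond such queries is precisely what the talk proposes.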

SILO: First-Order Algorithms for Large-Scale Optimization

Abstract: It is well known that for nonconvex unconstrained optimization with Lipschitz smoothness, gradient descent and stochastic gradient descent are the optimal first-order algorithms in the deterministic and stochastic settings, respectively. This naturally raises two questions: In the constrained setting, is it possible to design algorithms that achieve the same …
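
For context on the optimality claim in the first sentence, here is the standard textbook argument (not material from the talk) behind gradient descent's O(1/√K) stationarity rate on L-smooth nonconvex problems:

```latex
% One-step guarantee for gradient descent with step size 1/L on an
% L-smooth (possibly nonconvex) objective, via the descent lemma:
\[
  x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k)
  \quad\Longrightarrow\quad
  f(x_{k+1}) \;\le\; f(x_k) - \frac{1}{2L}\,\bigl\|\nabla f(x_k)\bigr\|^2 .
\]
% Telescoping over k = 0, ..., K-1 bounds the best gradient norm seen:
\[
  \min_{0 \le k < K} \bigl\|\nabla f(x_k)\bigr\|
  \;\le\; \sqrt{\frac{2L\bigl(f(x_0) - f^*\bigr)}{K}} .
\]
```

Matching lower bounds show that no deterministic first-order method can improve this rate over the L-smooth nonconvex class, which is the sense in which gradient descent is optimal here.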

SILO: Searching for architectures and BERT moments in specialized AI applications

Abstract: In 2018, advances in architecture design and self-supervised learning led to the “BERT moment” in natural language processing, in which supervised learning workflows were permanently supplanted by the pretraining and fine-tuning of massive Transformer models. This spurred scientists in more specialized areas (e.g., genomics, satellite imaging, and time series forecasting) to develop …
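
As a concrete illustration of the pretrain-then-fine-tune workflow the abstract refers to, here is a toy two-stage sketch in PyTorch: masked-token pretraining of a small Transformer encoder, followed by supervised fine-tuning of a classification head. The model sizes, the random stand-in corpus, and the downstream task are all illustrative assumptions, not anything from the talk.

```python
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 100, 0, 16

class TinyEncoder(nn.Module):
    def __init__(self, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.lm_head = nn.Linear(d_model, VOCAB)   # used during pretraining
        self.cls_head = nn.Linear(d_model, 2)      # used during fine-tuning

    def forward(self, x):
        return self.encoder(self.embed(x))

model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: self-supervised pretraining (masked-token prediction).
for _ in range(100):
    tokens = torch.randint(1, VOCAB, (8, SEQ_LEN))   # stand-in corpus
    mask = torch.rand(tokens.shape) < 0.15           # corrupt 15% of tokens
    logits = model.lm_head(model(tokens.masked_fill(mask, MASK_ID)))
    loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on a small labeled task.
for _ in range(20):
    tokens = torch.randint(1, VOCAB, (8, SEQ_LEN))
    labels = torch.randint(0, 2, (8,))               # stand-in labels
    pooled = model(tokens).mean(dim=1)               # simple mean pooling
    loss = nn.functional.cross_entropy(model.cls_head(pooled), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the recipe is that Stage 1 needs no labels, so the encoder can be trained on abundant raw data before the small labeled set is touched.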

SILO: Variational inference – reconciling statistical and convergence guarantees

Abstract: As a computational alternative to Markov chain Monte Carlo approaches, variational inference (VI) is becoming increasingly popular for approximating intractable posterior distributions in large-scale Bayesian models due to its comparable efficacy and superior efficiency. Several recent works provide theoretical justifications of VI by proving its statistical optimality for parameter …
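
For concreteness, the objective underlying VI (textbook material, not specific to this talk) is the evidence lower bound (ELBO), maximized over a tractable family Q:

```latex
% VI approximates the intractable posterior p(z | x) by solving
\[
  \hat{q} \;=\; \operatorname*{arg\,max}_{q \in \mathcal{Q}}\;
  \mathbb{E}_{q(z)}\bigl[\log p(x, z) - \log q(z)\bigr],
  \qquad
  \log p(x) \;=\; \mathrm{ELBO}(q) + \mathrm{KL}\bigl(q(z)\,\big\|\,p(z \mid x)\bigr).
\]
```

Because log p(x) is fixed, maximizing the ELBO is equivalent to minimizing the KL divergence to the exact posterior, which is the object that the statistical and convergence guarantees in the abstract concern.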

SILO: Towards Secure Large Language Models: From Model to System

Abstract: We are witnessing a paradigm shift in AI, transitioning from deep learning models to the era of Large Language Models (LLMs). This shift signifies a transformative advancement in AI, enabling it to be applied to diverse real-world safety-critical applications. Despite these impressive achievements, a fundamental question remains: are …

SILO: Self-Improving Transformers: Overcoming Length Generalization Challenges

Abstract: Large language models can perform algorithmic tasks through test-time computation but struggle to generalize far beyond the task difficulty of the training distribution. These limitations manifest across even simple tasks like arithmetic, string manipulation, and maze solving, where transformers learn shortcuts rather than the underlying algorithms. While prior solutions …
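
As a rough picture of what a self-improvement loop for length generalization can look like, here is a runnable toy sketch: train on short instances, let the model answer slightly longer ones, keep only the outputs that pass a cheap verifier, and fold them back into the training set. The task (multi-digit addition), all helpers, and the stand-in "model" are hypothetical illustrations, not the talk's actual recipe.

```python
import random

def make_tasks(n, digits):
    """Random addition problems at a given difficulty (digit length)."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    return [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(n)]

def verify(a, b, answer):
    """Cheap checker used to filter self-generated labels."""
    return answer == a + b

def self_improve(model_answer, rounds=3, start_digits=2):
    # Seed set: easy problems with ground-truth labels.
    data = [((a, b), a + b) for a, b in make_tasks(50, start_digits)]
    for rnd in range(rounds):
        # A real learner would be (re)trained on `data` here.
        harder = make_tasks(50, start_digits + rnd + 1)      # longer inputs
        candidates = [((a, b), model_answer(a, b)) for a, b in harder]
        accepted = [(x, y) for x, y in candidates if verify(*x, y)]
        data += accepted     # verified outputs become new training data
        print(f"round {rnd}: kept {len(accepted)}/{len(candidates)}")
    return data

# Stand-in "model": exact on the task, so every candidate is accepted.
self_improve(lambda a, b: a + b)
```

The verifier is the crux: it lets the model extend its own curriculum to harder instances without ever seeing labeled long examples.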