Systems | Information | Learning | Optimization

SILO: Neural Operators for Scientific Applications: Learning on Function Spaces

Abstract: Applying AI to scientific problems like weather forecasting and aerodynamics is an active research area, promising to accelerate model development and enable faster scientific discovery and engineering design. In practice, these applications require learning spatiotemporal processes and solutions to partial differential equations on continuous domains at multiple scales – …
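A core idea behind learning on function spaces is that a layer can act on a function's spectral coefficients rather than on a fixed grid, so the same learned weights apply at any discretization. Below is a minimal NumPy sketch of one such spectral layer (an illustration in the spirit of Fourier-type neural operators, not the speaker's architecture); the function names and the choice of 8 retained modes are ours.

```python
import numpy as np

def spectral_layer(u, weights, n_modes):
    """One Fourier-style layer: transform, reweight low modes, transform back.

    u: real array of shape (n,) -- a function sampled on a uniform grid.
    weights: complex array of shape (n_modes,) -- learned per-mode multipliers.
    """
    u_hat = np.fft.rfft(u)                  # spectral coefficients of the input
    out_hat = np.zeros_like(u_hat)
    k = min(n_modes, len(u_hat))
    out_hat[:k] = u_hat[:k] * weights[:k]   # act only on the retained low modes
    return np.fft.irfft(out_hat, n=len(u))  # back to a sampled function

# The same weights apply at any resolution -- the "operator" point of view.
rng = np.random.default_rng(0)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x_coarse = np.linspace(0, 2 * np.pi, 64, endpoint=False)
x_fine = np.linspace(0, 2 * np.pi, 256, endpoint=False)
y_coarse = spectral_layer(np.sin(x_coarse), w, 8)
y_fine = spectral_layer(np.sin(x_fine), w, 8)
```

Because the layer is defined on spectral coefficients, evaluating it on a 64-point or a 256-point grid gives the same underlying function, which is what "learning on continuous domains at multiple scales" requires.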

SILO: Self-Improving Transformers: Overcoming Length Generalization Challenges

Abstract: Large language models can perform algorithmic tasks through test-time computation but struggle to generalize far beyond the task difficulty of the training distribution. These limitations manifest across even simple tasks like arithmetic, string manipulation, and maze solving, where transformers learn shortcuts rather than the underlying algorithms. While prior solutions …
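The shortcut-versus-algorithm distinction can be made concrete with a toy example (ours, not from the talk): a "learner" that memorizes all training pairs is perfect in-distribution but fails completely on longer inputs, whereas the true algorithm generalizes to any length.

```python
import random

def train_shortcut_adder(max_digits):
    """A 'shortcut' learner: memorize every sum seen in training."""
    limit = 10 ** max_digits
    return {(a, b): a + b for a in range(limit) for b in range(limit)}

def shortcut_predict(table, a, b):
    return table.get((a, b))  # None whenever the pair was never seen

def accuracy(predict, pairs):
    return sum(predict(a, b) == a + b for a, b in pairs) / len(pairs)

table = train_shortcut_adder(2)  # "trained" on 1- and 2-digit operands only
rng = random.Random(0)
in_dist = [(rng.randrange(100), rng.randrange(100)) for _ in range(200)]
longer = [(rng.randrange(100, 10_000), rng.randrange(100, 10_000)) for _ in range(200)]

acc_in = accuracy(lambda a, b: shortcut_predict(table, a, b), in_dist)
acc_out = accuracy(lambda a, b: shortcut_predict(table, a, b), longer)
# The underlying algorithm (digit-wise addition with carries) has no such limit.
acc_alg = accuracy(lambda a, b: a + b, longer)
```

The memorizer scores 100% on the training distribution and 0% beyond it; a transformer that learns positional or memorization shortcuts exhibits the same qualitative failure mode.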

SILO: Theory for Diffusion Models

Abstract: In this talk I will survey our recent efforts to develop a rigorous theory for understanding diffusion generative modeling. The first part will cover discretization analyses that prove that diffusion models can approximately sample from arbitrary probability distributions provided one can have a sufficiently accurate estimate for the score …
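The standard score-based formulation underlying such discretization analyses can be sketched as follows (a generic setup, not necessarily the talk's exact assumptions):

```latex
% Forward (noising) process: an Ornstein--Uhlenbeck SDE run to time T
\[
dX_t = -X_t\,dt + \sqrt{2}\,dB_t, \qquad X_0 \sim q \ \text{(data distribution)},
\]
% Reverse (sampling) process, driven by the score of the noised marginals p_t
\[
dY_t = \bigl(Y_t + 2\,\nabla \ln p_{T-t}(Y_t)\bigr)\,dt + \sqrt{2}\,dB_t,
\qquad Y_0 \sim \mathcal{N}(0, I),
\]
```

where $p_t$ is the law of $X_t$. In practice the score $\nabla \ln p_t$ is replaced by a learned estimate $s_\theta(\cdot, t)$, and bounding the estimation error $\mathbb{E}\,\|s_\theta(X_t, t) - \nabla \ln p_t(X_t)\|^2$ is what "a sufficiently accurate estimate for the score" refers to: discretization analyses convert this bound, plus a step-size condition, into a guarantee on the sampled distribution.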

SILO: Learning Dynamics for Nash and Coarse Correlated Equilibria in Bimatrix Games

Abstract: In this talk, we will focus on learning in two-player games. First, we will provide a brief introduction to the possible behaviors of learning algorithms and mention various techniques that have been extensively used to guarantee convergence to Nash equilibria in zero-sum games. Finally, we will demonstrate how these …
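A classic instance of the convergence guarantees mentioned here: in a zero-sum bimatrix game, if both players run a no-regret algorithm such as multiplicative weights, their time-averaged strategies approach a Nash equilibrium even though the iterates themselves cycle. A minimal self-play sketch on matching pennies (our illustration; the step size and starting strategies are arbitrary choices):

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row's payoffs; column receives -A

def mwu_self_play(steps=20000, eta=0.02):
    """Simultaneous multiplicative-weights updates for both players."""
    x = np.array([0.7, 0.3])  # row's mixed strategy (asymmetric start)
    y = np.array([0.3, 0.7])  # column's mixed strategy
    x_sum, y_sum = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        x_sum += x
        y_sum += y
        gx = A @ y        # row's expected payoff per action
        gy = -A.T @ x     # column's expected payoff per action
        x = x * np.exp(eta * gx); x /= x.sum()
        y = y * np.exp(eta * gy); y /= y.sum()
    return x_sum / steps, y_sum / steps

x_avg, y_avg = mwu_self_play()
```

The last iterates spiral around the equilibrium, but the averages land near the unique Nash equilibrium (1/2, 1/2) for each player, with error controlled by the players' average regret.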

SILO: Beyond Decoder-Only Next Token Prediction

Abstract: This talk presents two distinct approaches that expand the potential of Transformer architectures beyond the traditional decoder-only, causal-attention models for next-token prediction. In the first half, we will examine looped Transformers with an adaptive iteration mechanism, demonstrating that these models can learn highly length-generalizable solutions for algorithmic tasks. The …
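The looping idea can be caricatured without any learned weights: apply one shared block repeatedly and let the input decide how many iterations are needed. Below is a toy sketch (ours, not the talk's architecture) where the "block" is a contraction iterated to a fixed point, so harder inputs simply consume more loop steps, an adaptive iteration mechanism in miniature.

```python
import numpy as np

def looped_block(w, b, u, max_iters=200, tol=1e-8):
    """Apply one shared (weight-tied) block until its output stops changing.

    The block here is the contraction z -> tanh(W z + u + b); the number of
    iterations is chosen adaptively per input u rather than fixed in advance.
    """
    z = np.zeros(b.shape)
    for it in range(1, max_iters + 1):
        z_new = np.tanh(w @ z + u + b)
        if np.linalg.norm(z_new - z) < tol:  # halt once a fixed point is reached
            return z_new, it
        z = z_new
    return z, max_iters

w = np.array([[0.2, 0.1], [0.0, 0.3]])  # small norm, so the map contracts
b = np.array([0.5, -0.4])
z_easy, n_easy = looped_block(w, b, np.array([0.0, 0.0]))
z_hard, n_hard = looped_block(w, b, np.array([3.0, -2.0]))
```

Because the loop count scales with the input rather than with a fixed depth, this kind of architecture is a natural candidate for the length-generalizable solutions the abstract describes.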