Systems | Information | Learning | Optimization
 

SILO: Finite‑Time Bounds for Robust Reinforcement Learning with Linear Function Approximation

Abstract

Robust reinforcement learning (RL) focuses on designing optimal policies from data for MDPs with model uncertainties. Existing convergence guarantees for robust RL are either limited to tabular settings or rely on restrictive assumptions in the function approximation setting. We will present an RL algorithm for learning the optimal policy from data in the function approximation setting and provide finite-time sample-complexity bounds without requiring generative access to the underlying MDP model. Our algorithm combines ideas from distributionally robust optimization (DRO), two-time-scale stochastic approximation, and traditional (non-robust) fitted value iteration and Q-learning.
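The flavor of robust RL in the abstract can be conveyed with a toy sketch. The snippet below is not the speaker's algorithm: it runs plain robust Q-learning on a hypothetical 2-state, 2-action MDP with one-hot (hence trivially linear) features, using a total-variation (TV) uncertainty set, for which the worst-case Bellman backup has a simple closed form. All transition probabilities, rewards, and the radius `rho` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP (all numbers invented for illustration).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])  # P[s, a, s']: nominal transitions
R = np.array([[1.0, 0.0], [0.5, 2.0]])      # R[s, a]: rewards
gamma, rho = 0.9, 0.1                       # discount; TV-ball radius

# Robust Q-learning from sampled transitions, with one-hot (tabular) features.
# For a TV uncertainty set of radius rho, the worst-case backup shifts rho
# probability mass onto the worst successor state, so an unbiased sample of
# the robust target is available from a single sampled next state s'.
n_states, n_actions = R.shape
theta = np.zeros((n_states, n_actions))
visits = np.zeros((n_states, n_actions))
for t in range(50_000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.choice(n_states, p=P[s, a])  # sample from the nominal model
    V = theta.max(axis=1)                     # greedy value estimate
    target = R[s, a] + gamma * ((1 - rho) * V[s_next] + rho * V.min())
    visits[s, a] += 1
    alpha = 1.0 / visits[s, a] ** 0.7         # slowly decaying step size
    theta[s, a] += alpha * (target - theta[s, a])
```

The TV ball is chosen here only because its inner minimization collapses to a one-line expression; other uncertainty sets (e.g., KL balls) make the inner DRO problem nontrivial, which is where the two-time-scale machinery mentioned in the abstract becomes relevant.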

Bio

R. Srikant is the Director of the National Center for Supercomputing Applications, a Grainger Distinguished Chair in Engineering, and a Professor in the Department of Electrical and Computer Engineering and the Coordinated Science Lab, all at the University of Illinois Urbana-Champaign. His research interests include machine learning, applied probability, stochastic control, and communication networks.

He is the recipient of the 2015 INFOCOM Achievement Award, the 2019 IEEE Koji Kobayashi Computers and Communications Award, and the 2021 ACM SIGMETRICS Achievement Award. He has also received several Best Paper Awards, including the 2015 INFOCOM Best Paper Award, the 2017 Applied Probability Society Best Publication Award, and the 2017 WiOpt Best Paper Award. He was Editor-in-Chief of the IEEE/ACM Transactions on Networking from 2013 to 2017 and an Area Editor for Mathematics of Operations Research from 2023 to 2025.

March 4, 2026
12:30 pm (1h)

Orchard View Room

R. Srikant, UIUC