Speaker: Csaba Szepesvari
Abstract:
As early as the 1960s, Bellman and his co-workers used linear function approximation to solve planning problems in Markov Decision Processes with continuous state spaces that they could not solve with simple discretization. Their hope was that this approach would work in general. While there has been much work along the same lines across a number of research communities, until recently a clear theoretical formulation and understanding of this problem were missing. In this talk, after going through a brief example, I will describe a formal framework to address this challenge, as well as the outcome of recent research, finishing with a discussion of what remains open.
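
For readers unfamiliar with the setting, the sketch below illustrates the general idea of planning with linear function approximation on a toy problem: fitted value iteration that repeatedly regresses Bellman backups onto a small set of linear features. The MDP, features, and all parameters here are invented for illustration; this is not the framework or the results discussed in the talk.

```python
import numpy as np

# Toy 1-D continuous-state MDP: state s in [0, 1], two actions.
# Action 0 drifts left, action 1 drifts right; reward grows with s.
# Dynamics, reward, and features are all hypothetical.

rng = np.random.default_rng(0)
gamma = 0.9          # discount factor
n_samples = 200      # states sampled per iteration
n_next = 20          # next-state samples per (state, action) pair

def step(s, a):
    """Sample next states and rewards for a batch of states s under action a."""
    drift = -0.1 if a == 0 else 0.1
    s_next = np.clip(s + drift + 0.05 * rng.standard_normal(s.shape), 0.0, 1.0)
    return s_next, s_next  # reward = next state, so being near 1 is good

def features(s):
    """Linear-in-parameters features: a low-degree polynomial basis in s."""
    return np.stack([np.ones_like(s), s, s**2], axis=-1)

# Fitted value iteration with linear function approximation:
# project the Bellman backup onto the span of the features each round.
theta = np.zeros(3)
for _ in range(50):
    s = rng.uniform(0.0, 1.0, size=n_samples)
    backups = np.full(n_samples, -np.inf)
    for a in (0, 1):
        # Monte Carlo estimate of E[r + gamma * V(s')] for action a
        s_rep = np.repeat(s, n_next)
        s_next, r = step(s_rep, a)
        q = (r + gamma * features(s_next) @ theta).reshape(n_samples, n_next).mean(axis=1)
        backups = np.maximum(backups, q)
    # Least-squares fit of the backed-up values in the feature space
    theta, *_ = np.linalg.lstsq(features(s), backups, rcond=None)

print("learned weights:", theta)
print("approx V at s = 0.0, 0.5, 1.0:", features(np.array([0.0, 0.5, 1.0])) @ theta)
```

Whether and when this kind of procedure is guaranteed to produce a good policy, and with how much computation, is exactly the sort of question the abstract refers to as having lacked a clear theoretical formulation until recently.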