Systems | Information | Learning | Optimization
 

Transformer for Reinforcement Learning and Imitation Learning

Speaker: Kimin Lee

Abstract: Transformers have been successful in various domains, such as natural language processing and computer vision. Motivated by this success, many researchers have also leveraged the transformer architecture in reinforcement learning (RL) and imitation learning (IL). In this talk, I will introduce my recent work on utilizing the transformer architecture in RL and IL. First, I will describe Preference Transformer, a transformer-based architecture for reward learning from human preferences. I will also describe a transformer-based policy architecture that handles multi-modal inputs (images and text) to perform challenging robotic manipulation tasks.

Bio: Kimin Lee is a research scientist at Google. He is interested in directions that enable scaling deep reinforcement learning to diverse and challenging domains, including human-in-the-loop reinforcement learning, unsupervised reinforcement learning, and self-supervised learning. He completed his postdoctoral training at UC Berkeley (advised by Prof. Pieter Abbeel) and received his Ph.D. from KAIST (advised by Prof. Jinwoo Shin). During his Ph.D., he also interned with and collaborated closely with Honglak Lee at the University of Michigan.

October 21 @ 12:00 pm (1h)

CS Department, Room 1240, Virtual
