Systems | Information | Learning | Optimization
 

Learning to see in the wild. Should SSL be truly unsupervised?

Speaker: Pedro Morgado
Abstract: Self-supervised learning (SSL) aims to eliminate one of the major bottlenecks in representation learning – the need for human annotations. As a result, SSL holds the promise of learning representations from data in the wild, i.e., without the need for finite, curated, and static datasets. However, can current self-supervised learning approaches be effective in this setup? In this talk, I will show that the answer is no. When learning in the wild, we expect to see a continuous stream of potentially non-IID data. Yet, state-of-the-art approaches struggle to learn from such data distributions. They are inefficient (both computationally and in terms of data complexity), exhibit signs of forgetting, are incapable of modeling dynamics, and result in inferior representation quality. The talk will introduce our recent efforts to tackle these issues.

Bio: Pedro Morgado is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison. Prior to joining UW-Madison, he was a post-doctoral fellow at Carnegie Mellon University, working with Abhinav Gupta. He earned his Ph.D. degree from the University of California San Diego, advised by Prof. Nuno Vasconcelos, and his B.Sc. and M.Sc. degrees from Universidade de Lisboa, Portugal. His main research interests lie in computer vision and deep learning, focusing on multi-modal learning and self-supervised learning.

October 5 @ 12:30 pm (1h)

Orchard View Room, Virtual

