Systems | Information | Learning | Optimization

Reliable Open-World Learning Against Out-of-distribution Data

The real world is open and full of unknowns, presenting significant challenges for AI systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model sees a test-time input that differs from its training data and therefore should not be predicted by the model. As ML is deployed in increasingly safety-critical domains, the ability to handle out-of-distribution data is central to building open-world learning systems. In this talk, I will discuss methods, challenges, and opportunities toward building ROWL (Reliable Open-World Learning). I will first describe mechanisms that improve OOD uncertainty estimation using a calibrated softmax score and input preprocessing. I will then present recent advances in an energy-based OOD detection framework, which produces a theoretically meaningful score that is aligned with the probability density of the input data. We show that the energy score is less susceptible to the softmax overconfidence issue and leads to superior performance on common OOD detection benchmarks. Lastly, I will discuss how to robustify OOD detection algorithms in the presence of adversarial and physically plausible image perturbations.
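To make the two scores concrete, here is a minimal sketch, assuming a PyTorch classifier model that maps inputs to per-class logits. The temperature T, perturbation size eps, and threshold tau are illustrative placeholders rather than values from the talk; the energy score follows the standard formulation E(x; f) = -T log sum_i exp(f_i(x)/T).

import torch
import torch.nn.functional as F

def softmax_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Maximum softmax probability; higher means more in-distribution.
    A temperature T > 1 calibrates the otherwise overconfident softmax."""
    return F.softmax(logits / T, dim=-1).max(dim=-1).values

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Negative free energy, -E(x; f) = T * logsumexp_i(f_i(x) / T).
    Higher values indicate in-distribution; the score is aligned with
    the (unnormalized) log density of the input."""
    return T * torch.logsumexp(logits / T, dim=-1)

def preprocess_input(model, x: torch.Tensor, T: float = 1000.0,
                     eps: float = 0.0014) -> torch.Tensor:
    """Input preprocessing: nudge x in the direction that increases the
    temperature-scaled softmax confidence, which tends to widen the score
    gap between in- and out-of-distribution inputs. T and eps are
    illustrative, not values from the talk."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / T, dim=-1)
    # Cross-entropy loss with respect to the model's own prediction.
    loss = -log_probs.max(dim=-1).values.sum()
    loss.backward()
    return (x - eps * x.grad.sign()).detach()

@torch.no_grad()
def detect_ood(model, x: torch.Tensor, tau: float) -> torch.Tensor:
    """Flag inputs whose energy score falls below a threshold tau as OOD."""
    return energy_score(model(x)) < tau

In this sketch, one would score preprocessed inputs with softmax_score, or raw inputs with energy_score, choosing the threshold tau on held-out validation data.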
October 28 @ 12:30 pm (1h)

Remote

Sharon Li
